Router with good gaming performance

I want to buy an OpenWrt-capable router with good gaming performance - low pings. I need some LAN Ethernet ports - 3 at minimum, up to 9 maximum. WiFi is required for mobile devices and a laptop, but it doesn't have to be top performance - the important devices will be connected via Ethernet. The ISP will provide a GPON ONT; I will connect to it via Ethernet. Depending on the plan, the download speed will be between 150 Mb/s and 1 Gb/s (I have yet to choose the plan). I'm open to any suggestions, but I want a device with good OpenWrt support. My plan was to buy a router and a wireless AP, then add a Gigabit switch, since I found no consumer-grade router with more than 4-5 Ethernet ports.

At 1 GBit/s effective WAN speed, your choices are basically between x86 and top-end mvebu.

2 Likes

What slh said

2 Likes

Don't bother with anything other than x86 and a smart switch; you will regret it. High-end ARM devices currently will not do reliable SQM (bufferbloat/ping management) above, say, 300 or 400 Mbps. You want a mini PC with at least dual Intel NICs, and either a Cisco SG350-10 (more features, fewer ports) or a Zyxel GS1900-24 (good features, more ports), both of which are reasonably priced online.

For WiFi, either use an existing OpenWrt router (if you have one) in dumb AP mode, or buy a dedicated AP such as a TP-Link EAP series device, a Ubiquiti device, or something similar.

2 Likes

As far as ping goes, ar71xx and mvebu are fairly good. Avoid ipq806x.

x86 can be good but not always.

An ar71xx / ath79 device isn't going to be able to manage much more than a few hundred Mbps of throughput without SQM, and only a couple hundred Mbps with SQM.

At rates over a few hundred Mbps, x86 would be my first choice, with @slh's extensive experience with the high-end mvebu devices (with which I have no personal experience) being well worth consideration. For me, the flexibility and ability to replace/upgrade individual components is a big advantage of separate switch(es) and AP(s). Another advantage is that "internal" connectivity that relies on the switch is still present if the "router" is rebooted.

I assume high-end mvebu basically means Linksys WRT3200ACM, WRT32X, maybe Omnia Turris?

Aye.
These can do gigabit SQM, but barely.
A PC will cost about the same and is much faster.

...but then again, you very rarely need SQM if you have 100+ Mbps :wink:

Not true at all. It's easily possible to destroy a VoIP call over a gigabit link by starting a large download or torrent, or copying a big video to a NAS. Furthermore, I've found that with gigabit WAN I now need to set up individual SQM instances on some of my LAN links. For example, I have a powerline link between my server closet and a media PC, and it can only handle somewhere between 30 and 100 Mbps, fluctuating. I put an SQM instance on the far end of the link to limit it to 30 Mbps, which is plenty for Fubo streaming, and now my IPTV performance is way more stable. Before, I was getting up to 2 seconds of bufferbloat on that link during speed tests; now it's 20 ms.

The need for SQM is a function of the ratio between the rate at which traffic can arrive at the bottleneck point and the bandwidth of that bottleneck point. If that ratio goes over 1, you have bufferbloat regardless of what the absolute speeds are.
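For anyone who wants to play with that rule of thumb, here is a tiny sketch (the rates are just illustrative assumptions, including the ~30 Mbps powerline figure mentioned above):

```python
# Toy calculation of the "ratio over 1 means bufferbloat" rule of thumb.
# All numbers are illustrative assumptions, not measurements.

def ingress_to_bottleneck_ratio(ingress_mbps, bottleneck_mbps):
    """How fast traffic can arrive vs. how fast the bottleneck drains it."""
    return ingress_mbps / bottleneck_mbps

# One gigabit LAN host feeding a ~30 Mbps powerline link:
print(ingress_to_bottleneck_ratio(1000, 30))    # ~33 -> the queue grows, bufferbloat
# The same host feeding a symmetric gigabit WAN:
print(ingress_to_bottleneck_ratio(1000, 1000))  # 1.0 -> the queue only builds in bursts
```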

2 Likes

Again, I've proven you wrong several times, and for some unknown reason you claim that it's a requirement. It will balance itself on a residential connection in ~99% of all cases, and do note that I actually wrote "very rarely". What you're claiming regarding your streaming box doesn't make any sense unless you have some broken hardware on your network (yes, I would consider 2000 ms as broken on a LAN).

The math of FIFO queues says that if the queue drains at rate X and your network feeds it at a rate Y greater than X, then the length of the queue grows over time. There is no way to avoid this. Your assertion that 100 Mbps rarely needs SQM is the same as saying that LANs are rarely able to send more than 100 Mbps. This is obviously false: plenty of people have, say, 3 machines with gigabit Ethernet and so are capable of slamming 3000 Mbps into the 100 Mbps queue with no problem. Now, if you were saying this about 10 Gbps Ethernet uplinks I might agree with you, but at 100 Mbps you can easily find tens of threads here from the last month alone with people complaining about bufferbloat on their 100 Mbps (or faster) link and wanting faster hardware because of it.
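To put rough numbers on that (the buffer size is an assumption; the 100 Mbps bottleneck and three gigabit senders are taken from the example above):

```python
# Sketch: how fast an unmanaged FIFO fills when senders exceed the bottleneck.
# The 4 MB buffer is an assumed, not-unusual size for illustration only.

SENDERS_MBPS = 3 * 1000         # three gigabit LAN hosts
BOTTLENECK_MBPS = 100           # 100 Mbps uplink
BUFFER_BYTES = 4 * 1024 * 1024  # assumed 4 MB unmanaged buffer

surplus_mbps = SENDERS_MBPS - BOTTLENECK_MBPS          # rate at which the queue grows
fill_time_s = BUFFER_BYTES * 8 / (surplus_mbps * 1e6)  # time until the buffer is full
standing_delay_s = BUFFER_BYTES * 8 / (BOTTLENECK_MBPS * 1e6)  # delay once it is full

print(f"buffer fills in ~{fill_time_s * 1000:.0f} ms")              # ~12 ms
print(f"every packet then waits ~{standing_delay_s * 1000:.0f} ms") # ~336 ms
```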

Some people get high-speed links with DOCSIS 3.1, which has queue management built in, so it's possible you don't need SQM, but this is hardly universal.

1 Like

Even with DOCSIS 3.1, there are still gains to be had. As pointed out above, line speed has very little to do with the need for controlling queue-based delays. From http://www.dslreports.com/speedtest/results/bufferbloat

Note that there are lines in the 10 Mbps range and below with well over 250 ms of what appears to be buffering delay, and many with around 1000 ms (a full second).

[image: dslreports speedtest bufferbloat results chart]

1 Like

@dlakelan
Yes, the 100 Mbit connection will bottleneck and get congested; however, clients will to some extent throttle themselves, and again that's not really an issue if you have a somewhat similar connection speed. Take your example: unless you're running some kind of DDoS software, those boxes will eventually back off (slow down) under normal conditions, or your switch will. If they were all on gigabit, you probably wouldn't notice it on your network at home.

Office #1
DSL 21/2 Mbit (actual speed)... upload bottlenecks easily and needs shaping to keep VoIP quality decent.

Office #2
Cable 100/100 (actual speed): it can "max out" the upload and still maintain good VoIP quality without any QoS at all. I do doubt I'd see the same results with 100+ clients, for instance.

Both offices use 1000 Mbit internally.

What it more or less boils down to is how many clients you have hammering the slow link, and whether you have devices/applications that ignore any kind of flow control and/or backing off.

2 Likes

I agree with this 100%, but the other factor is the behavior of the buffers. A large unmanaged buffer looks like a perfect link because you put packets into it and they never get dropped; eventually they get delivered. All it takes is for a buffer to fill for 100 ms and you've destroyed your VoIP call: first you get a bunch of garbled audio, and then, as the jitter buffers fill, people start talking over each other and can't hold a conversation.

0.1 seconds at a gigabit is 12.5 megabytes, which is basically one high-quality DSLR camera image, or 10 seconds of video on a phone... My smartphone has tens of gigabytes of storage; I could easily walk out to a soccer game with my kids and take a total of 20 minutes of video, or maybe a thousand sports photos. I upload them to my computer, which automatically syncs them to cloud storage... the sync program isn't used to having a gigabit and doesn't throttle itself, the link saturates for a second at a time, and call audio simply drops out or becomes utterly useless. I'm telling you, it happens if you don't manage QoS in some way!
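If anyone wants to check that arithmetic, here is the back-of-the-envelope version (the ~10 Mbps phone-video bitrate is my assumption):

```python
# 100 ms of a saturated gigabit link, expressed in everyday quantities.
GIGABIT_BPS = 1e9

buffered_bytes = 0.1 * GIGABIT_BPS / 8            # bytes queued in 100 ms
print(buffered_bytes / 1e6)                       # -> 12.5 MB

PHONE_VIDEO_MBPS = 10                             # assumed 1080p phone video bitrate
seconds_of_video = buffered_bytes * 8 / (PHONE_VIDEO_MBPS * 1e6)
print(seconds_of_video)                           # -> ~10 s of phone video
```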

What SQM and similar technologies (fq_codel etc.) do is force connections to throttle quickly, and to throttle the least important connections first (if you use DSCP properly, for example), so that long-running downloads throttle before games, VoIP, etc. Without this technology, even people with 100 Mbps symmetric or more will easily have difficulty carrying on a VoIP conversation when some background process fires up and dominates the link for seconds or tens of seconds at a time. Someone else in a current thread mentioned watching gaming video streams at 50 Mbps (I guess the computing horsepower to compress the stream in real time doesn't exist, so they're sending rather raw video). A person with three gaming kids in their house could easily saturate a 150 Mbps link with that while trying to have a phone conversation, and it will suffer a lot.
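To make the DSCP point concrete, here is a deliberately simplified toy in Python. It is not how fq_codel or cake actually work internally (they are far more sophisticated, with per-flow queues and soft priorities); it just shows the idea of latency-sensitive traffic being dequeued ahead of bulk traffic:

```python
# Toy strict-priority dequeue keyed on DSCP class; illustration only.
from collections import deque

# Lower tin number = served first. These assignments are illustrative.
TIN_FOR_DSCP = {"EF": 0, "CS5": 0, "AF41": 1, "BE": 2, "CS1": 3}

class ToyDiffservQueue:
    def __init__(self):
        self.tins = [deque() for _ in range(4)]

    def enqueue(self, packet, dscp="BE"):
        self.tins[TIN_FOR_DSCP.get(dscp, 2)].append(packet)

    def dequeue(self):
        for tin in self.tins:   # strict priority, for simplicity
            if tin:
                return tin.popleft()
        return None

q = ToyDiffservQueue()
q.enqueue("torrent chunk", dscp="CS1")   # background/bulk
q.enqueue("VoIP frame", dscp="EF")       # latency-sensitive
print(q.dequeue())  # -> "VoIP frame" leaves first even though it arrived later
```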

2 Likes

Since symmetric 100/100 cable has only become available very recently, this is undoubtedly due to the DOCSIS 3.1 standard requiring the cable modem to do smart queue management in the modem. So yeah, if someone else does it, then you don't need to do it in your router (though your router might do a better job).

EDIT: it's even more the case if you're using VoIP provided by the cable provider, as they can go out of their way to identify and prioritize their own VoIP service over their network, thereby privileging that VoIP traffic in both directions with DSCP etc. Try using a third-party VoIP provider like Anveo or voip.ms or some such thing and you'll likely find very different performance.

You never "need" sqm, but if latency under load is important than sqm will help no matter the wan-linkspeed. I personally observed multiple 100ms latency under load increases when saturating >100 Mbps links, which for my taste are too much and hence I use a traffic shaper to keep latency under control. IMHO the sticking point is that no ISP I know of looks at keeping latency under load controlled with the same priorety than I do, if only for the simple economics of downstream bufferbloat control getting expensive real quick if an ISP needs to say shape say a couple of hundresd users at the BNG or DSLAM level.

How can you prove @dlakelan's requirement wrong? I believe that such requirements need to be specified individually for each network, as this is mostly a policy matter.

Well, a lot of devices are "broken" in that they offer under-managed and over-sized buffers, but mostly this is by design (static buffer sizing that works reasonably well for the top speed will be too much at lower speeds). Traffic shaping offers a way to keep tight latency control in spite of broken-by-design hardware.

Well, but any latency-sensitive use during those epochs when the queueing transiently goes awry will suffer; that mostly includes VoIP and on-line gaming, as well as, to a lesser degree, video streaming.

Well, at a 1000-to-100 rate reduction as in your example, a single device is sufficient to inflict noticeable latency damage, as demonstrated amply by users who observe gaming issues while concurrently torrenting. I do not doubt your examples (and agree that they show that SQM is not a generic requirement for everybody), but I want to note that these observations do not seem to generalize to all users.

2 Likes

We're talking about residential connections (and networks) with a handful of clients, not a campus or office that's using a heavily "overbooked" connection. Again, I'm not saying that SQM isn't needed; however, it's extremely misleading to claim that you'll need it or otherwise your connection will be useless, which is more or less the gist every time SQM gets mentioned. Please be reasonable and try to apply it to the real world. Yes, you will potentially get a spike in latency, which is usually very minor at these speeds (we're still on the residential-networks topic) but might (and probably will) escalate quite a bit when you have "a lot" of bandwidth-hungry clients. You can definitely trigger such scenarios, but in the real world they are rare, even if you use P2P heavily, for instance.

Bottom line is, SQM has its place, but the zealotry is getting a bit out of hand.

Well, the topic seems to request "good gaming performance", so IMHO it is clear that latency-under-load control is important to the OP. And IMHO the issue is independent of the type of network or even the number of devices: as long as a network operates under link-saturating conditions often enough, buffer management (or at least proper buffer sizing) becomes important.

I agree, "need" is strong but in the context of this thread I would claim proper traffic-shaping/buffer-management is recommended.

Sure, the policy question for the residential network operator is simply how well such spikes can be tolerated; on-line FPS players especially seem to have very little tolerance (judging from the number of threads). I also note that many "streaming" video suppliers do not actually stream, but rather use DASH, which instead of transferring more or less isochronously at the video stream's bitrate will cyclically dump packets at the maximum achievable rate, cyclically causing link saturation. This alone can noticeably decrease the perceived quality of VoIP and introduce unwanted "lagging" in games. Now, whether a user cares or not is independent of whether it happens at all :wink:

With this I disagree; these scenarios are much more common than one naively assumes, unless one never saturates one's link for long enough.

I guess I see what you mean, but I also believe that in the context of this thread your comment is misplaced as the OP explicitly asked for:

2 Likes

Yes, you can observe this behavior nicely in netdata: every few seconds you will see a spike of bandwidth at full link capacity lasting a second or so. So if you like your voice conversations to drop every third word, it's easily achievable :stuck_out_tongue_winking_eye:
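A quick sanity check on why those spikes last about a second (segment/buffer depth and bitrate are assumptions, chosen to roughly match that observation):

```python
# How long a DASH "top up the playback buffer" burst saturates a link.
LINK_MBPS = 1000        # gigabit WAN
VIDEO_MBPS = 25         # assumed 4K-ish stream bitrate
REFILL_SECONDS = 30     # assumed amount of playback fetched per cycle

burst_megabits = VIDEO_MBPS * REFILL_SECONDS
burst_duration_s = burst_megabits / LINK_MBPS
print(f"~{burst_duration_s:.2f} s at full line rate every ~{REFILL_SECONDS} s")
# -> ~0.75 s bursts, i.e. "a spike ... lasting a second or so"
```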

The OP asks for a router with good gaming performance at between 100 Mbps and 1000 Mbps. The requirement for good gaming performance means SQM is needed, since good performance means latency fluctuations of less than, say, 20 ms most of the time, and game packets never dropped when you could drop another packet instead.

I strongly suspect that @dizzy has experienced the benefits of DOCSIS 3.1 deployed well, or at least of ISP VoIP traffic given a fast lane by the ISP. ISPs usually do monkey business with packets to make their in-house VoIP service as high quality as possible. Games and third-party VoIP don't get that advantage.