I was hoping someone with a fast connection could give me a ballpark number on where this router caps out with SQM, on a basic OpenWrt install with nothing fancy running other than cake + layer_cake. I'm mostly interested in wired performance.
This router has been sitting at $100 for a refurb model on Amazon for a while now and I'm wondering if it's a good buy.
Can you recommend anything faster? I was hoping for more. I just want a working QoS at higher speeds. I don't want to be stuck constantly shopping for routers because of CPU limitations as my connection inevitably speeds up.
I have read about some Asus models that have found a way to offload QoS, allowing near-gigabit speeds. Netduma also claims to do QoS at gigabit speeds on their rebadged Netgear routers. These options are $180+, but I wonder if it's worth paying the premium to avoid another router purchase for at least three years.
It sounds like you are a good candidate for a mini-PC-based router. Buy the PC and a decent managed switch. Lots of options on AliExpress or Amazon. Ideally you get a Celeron 4000-series CPU and four or more Intel gigabit NICs.
Awesome. I can't wait. Fortunately, by the time those speeds arrive in my area, hardware will be much faster at the same price.
Also, one suspects that the techniques needed to handle low latency on 10Gbps connections are somewhat different from what we're doing today with Cake and HFSC and the like. One reason: movies are not going to get longer, and screens are not going to get dramatically higher resolution. So while we encode movies at 10Mbps today, maybe soon we'll encode at 25Mbps, but I don't expect 1Gbps movie encoding any time soon. Therefore the number of people on your network who all need to be streaming at the same time to cause congestion will go from, say, 5-10 at 200Mbps to several hundred at 10Gbps...
Also, low-end computers like the ones we use for everyday desktops will be unable to slam data across the network at 10Gbps, so it'll take several computers in your house working at maximum capacity on file uploads/downloads to saturate it. Probably WRR or DRR implemented in hardware on a 10Gbps switch will be sufficient for almost everyone to keep latency under control on the local link (let's not talk about how much we'll be hating our ISPs for overselling their backhaul, though).
Sure, but even with weighted round robin, if you put DSCP on your game packets, your latency will be fine. Think of it this way: at 10Gbps, sending a 1500-byte packet takes 1.2 microseconds. So if you spend more than 1.2 microseconds deciding what to do with a packet, you'd have been better off just sending it. That means you have just about 3000 cycles to do absolutely everything, including all the copying and driver stuff and interrupts and kernel ticks and whatever. Basically at this point it all needs to be done in dedicated ASIC hardware.
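To make that budget concrete (the 2.5 GHz core clock is my assumption, not stated above, so scale it to whatever your CPU runs at):

```python
# Per-packet time budget at 10 Gbps line rate with full-size frames.
LINK_BPS = 10e9      # 10 Gbps link
FRAME_BYTES = 1500   # MTU-sized packet
CPU_HZ = 2.5e9       # assumed 2.5 GHz core clock (my assumption)

serialize_s = FRAME_BYTES * 8 / LINK_BPS   # time to put one frame on the wire
cycles = serialize_s * CPU_HZ              # CPU cycles available per packet

print(f"{serialize_s * 1e6:.1f} us per packet")  # 1.2 us
print(f"{cycles:.0f} cycles per packet")         # 3000 cycles
```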
At 10GbE, saturated with 1500 byte packets it's 83k packets per second. WRR with say 128:1 ratio, you might send 650 bulk packets before your game packet gets a chance. 650 packets takes 0.8 ms so even braindead QoS will be ok
maybe, I was calculating on my phone so might have goofed. let me rework the numbers and confirm
yep, 833k packets per second.
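For anyone checking along at home, here's my sketch of the arithmetic, including the ~650-bulk-packet WRR burst estimate from the earlier post:

```python
# Packets per second at 10 Gbps saturated with 1500-byte frames,
# plus the wait behind a 650-packet bulk burst under WRR.
LINK_BPS = 10e9
FRAME_BYTES = 1500

pkt_time_s = FRAME_BYTES * 8 / LINK_BPS  # 1.2 us per frame
pps = LINK_BPS / (FRAME_BYTES * 8)       # ~833,333 pps, not 83k

wait_ms = 650 * pkt_time_s * 1e3         # ~0.78 ms before the game packet goes
print(f"{pps:,.0f} pps, {wait_ms:.2f} ms wait")
```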
I think it'd be rare to see a legit, non-DoS traffic flood of small frames. The main reason to flood the bandwidth is big uploads, as @fuller mentioned: syncing to a cloud or some such thing.
I realized that my calculation for WRR was wrong, though, and I think the correction actually helps. Suppose you have a WRR queue system. For efficiency you might let the queue send N packets at weight 1 and kN packets at weight k... but typically N is 1 at GigE in these hardware switches; in other words, it sends 1 packet from the low-priority queue, 32 from the mid, and 128 from the top, or something like that. At 10GE let's suppose it's 10 for weight 1, just for efficiency. So the low-priority queue can send 10 packets = 12us, then say 320 packets from the mid-priority queue = 0.4ms, and then the high-priority queue can send 1280 packets.
So your game packet arrives, it waits around 1ms, and then every game packet from every player on your LAN, up to 1280 packets, gets sent all at once... and then we're back to 10 packets at a time from the low-priority queue. Obviously adjustable weights might be useful here, but the main point is that low latency is easily achievable in typical switch hardware, which can process millions of packets per second thanks to specialized silicon.
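Running those round-robin numbers (same 1.2 us/frame figure; the 10/320/1280 split is just my example weights from above):

```python
# Worst-case queueing for a high-priority (game) packet under the WRR
# scheme sketched above: 10 low / 320 mid / 1280 high packets per round.
PKT_US = 1.2                 # serialization time per 1500-byte frame at 10 Gbps
low, mid, high = 10, 320, 1280

# A game packet arriving just as the low-priority turn begins must wait
# out the low and mid bursts before the high queue is served again.
worst_wait_us = (low + mid) * PKT_US
round_ms = (low + mid + high) * PKT_US / 1000  # one full WRR round

print(f"worst-case wait ~{worst_wait_us:.0f} us, full round ~{round_ms:.2f} ms")
```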
Ideally you'd have some more sophistication in these switches so you can do things like reprioritize/reclassify on ingress, limit the total bandwidth at high priority (so it's WRR, but with policing on ingress to the high-priority queues to avoid starvation), and a few things like that. Some high-end home switches and low-end small-business switches can do this already. Zyxel has a pretty simplistic version, on Netgear you can set up rules to DSCP-tag and so forth... Cisco is probably the most sophisticated, I think.
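On a Linux/OpenWrt box (rather than a hardware switch), the tag-on-egress idea is just a mangle-table rule. A minimal sketch; the UDP port here is a placeholder, substitute whatever your game actually uses:

```shell
# Tag outbound game traffic as EF (DSCP 46) so downstream priority/WRR
# queues can recognize it. Port 3074 is only an example.
iptables -t mangle -A POSTROUTING -p udp --dport 3074 -j DSCP --set-dscp-class EF
```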
It's normal for my ISP to offer free speed upgrades over the years, so I wanted to be ready for at least one more upgrade. The cheapest plan in my area went from 25Mbps -> 50Mbps -> 100Mbps -> 150Mbps. You guys are having a very interesting discussion about 10GE, but realistically I wouldn't subscribe to a speed plan like that even if it were available. I just can't see myself ever needing more than gigabit.
Setting up a mini PC sounds a little daunting for someone with my skill level, and looking at the prices it might hit $300. I will probably just use that $100 WRT32X as yet another stopgap until consumer routers get solid CPU upgrades.
Maybe a regular here could manage a thread where users share their router model and maximum SQM speeds, with a dslreports test or something similar. I think it would be an amazing tool for the community to use when planning their next purchase.
As for the routing/shaping speed testing @Questionable mentions, it'd be nice to have some speed estimates. I have spent some time trying to do that and made a speed-monitoring script, but I've only had a few people donate samples, mostly from idle links. The script seems to work OK, but the next step requires people to run the thing and donate the data. It might also be good for me to downsample the data somewhat so it can run longer without overflowing the available storage on limited routers... so it's not quite ready for prime time.
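For the downsampling step, something like collapsing each minute of raw samples into (min, mean, max) might be enough. A rough sketch, not the actual script (the names here are illustrative):

```python
# Downsample rate samples so a long-running monitor fits in a router's
# limited flash: collapse each bucket of raw samples into (min, mean, max).
def downsample(samples, bucket=60):
    out = []
    for i in range(0, len(samples), bucket):
        chunk = samples[i:i + bucket]
        out.append((min(chunk), sum(chunk) / len(chunk), max(chunk)))
    return out

# 600 one-second samples -> 10 one-minute summaries
summaries = downsample(list(range(600)), bucket=60)
```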