Router for 1-2gbps?

I doubt that you've read the fine print of the contract, unless you are on a business contract. Basically, no provider is going to plan their residential network without some (huge) amount of overbooking. Nowadays the exact share ratio is unlikely to be stated by most providers, and there may be no share-ratio sizing rule at all.
If you don't believe me, run an RFC 2544 test for the maximum supported duration and check whether it passes or your connection gets temporarily limited/suspended. Services with a CDR/SLA are in a completely different price range.

When the network is highly loaded, the author may not get the full 2 Gbps; the contract likely says "up to 2Gbps". Without knowing the exact CDR, no reasonable QoS can be configured. The best option would be to dedicate some small CDR to real-time and streaming traffic. Even then, if the shaper is set to 2 Gbps, the router won't know when the (regular) network is congested and therefore when to start queuing/dropping traffic. Adding tenant-specific applications complicates the traffic classification/configuration further, so there is no point in using a high-end device with a constant dedication. Delivering some "up to 500 Mbps" of internet connectivity "as is" could be more reasonable.

This is a valid point. Depending on where the OP resides, this should not be overlooked. Many ISPs in the US who provide residential service have fine print that often includes verbiage regarding sharing a connection. This can often result in termination of service, at best. Not sure if the same holds true in other countries, but again, it should be considered by the OP.

Just to add additional info in a different direction: if you want to split the bandwidth fairly between the units, you can run a 4-class QFQ and then put cake or even just fq_codel on each class.

With 2 Gbps aggregate you don't necessarily need traffic shaping, it might be sufficient to do this kind of fair queuing.
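A minimal sketch of the four-class QFQ idea above, one class per unit with fq_codel inside each; the interface name and subnet are placeholders, not a tested config:

```shell
# QFQ root with one class per unit on the LAN-facing interface ("eth1" is
# a placeholder); equal weights give equal shares under contention.
tc qdisc add dev eth1 root handle 1: qfq
for i in 1 2 3 4; do
    tc class add dev eth1 parent 1: classid 1:$i qfq weight 1
    tc qdisc add dev eth1 parent 1:$i fq_codel
done
# Steer each unit's subnet into its class (shown for unit 1 only;
# 192.168.1.0/24 is a placeholder subnet).
tc filter add dev eth1 parent 1: protocol ip prio 1 \
    u32 match ip dst 192.168.1.0/24 flowid 1:1
```

Unequal weights (e.g. `weight 2` for one class) would be the way to prioritize one unit over the others, as mentioned for the desktop setup below.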

I'm running QFQ on my desktop machines with unequal weights to prioritize stuff like gaming or vidconf over NFS file sharing. It works fine.

2 Likes

You are new around here, no? :wink: This, doing competent traffic-shaping/AQM/QoS without knowing the CDR, is exactly what we do, and if I might add, to pretty good effect.
For most links the achievable rate is pretty much static and e.g. sqm-scripts/luci-app-sqm can be used with fixed traffic shaper settings, and for truly variable rate links like LTE/5G there are cake-autorate and sqm-autorate that constantly track the experienced delay and adjust the shaper to keep latency acceptable...

In practice all of these work well in spite of:
a) lack of reliable information about a link's true CIR and PIR
b) the fact that ingress shaping is happening on the wrong side of the link and hence is less precise/strict than egress shaping.

This might not work perfectly all of the time, e.g. for performing truly critical services* over one's home link, but it generally works well enough to make a noticeable difference in network responsiveness.

BTW, this does not really change if you artificially split a 2 Gbps link into four 500 Mbps links; only the likelihood of seeing link saturation decreases a bit. But honestly, most links are idle most of the time anyway; daily average usage still tends to be in the high single to low double digit Mbps range (according to verbal communication with ISP NOC personnel in Germany).

*) Telesurgery, on-line control of critical machinery, these kind of things that are anyway not really suited for usage over a best-effort network like the internet.

2 Likes

Yepp, there is a whole continent of legal questions to explore, like: will offering such a service make you legally an ISP? If yes, will you be bound to the same legal requirements as an ISP? If not, how is responsibility for potential illegal uses of the internet access "distributed", both if no detailed logs exist and if they do exist? ...

However, this is something best discussed with a lawyer specialized for that field in your jurisdiction and not this forum. :wink: I think that in Germany, for WiFi participating in the freifunk network might help ameliorate the legal issue to some degree (but even in Germany, talk to a lawyer).

For egress, I certainly would try to press BQL into service, avoiding the need for a costly upstream traffic shaper (but that requires the uplink capacity to be close enough to the interface speed).
For downlink, I doubt that qfq will see enough even transient queueing to do its thing... if you feed a 2.5 Gbps interface from a 2.0 Gbps internet downlink, there will never be queue buildup at the 2.5 Gbps interface (at least not one purely driven by ingress traffic rates).
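For reference, BQL state is exposed per TX queue in sysfs; a quick sketch of inspecting and capping it (interface name is a placeholder, and the cap value is illustrative):

```shell
# Current byte limit BQL has settled on for the first TX queue of "eth0"
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit
# Optionally cap the limit (in bytes) to keep the driver ring short;
# BQL normally auto-tunes this, so only do this for experimentation.
echo 3000 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_max
```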

Other than that, I would do something similar, with a first level of fairness queueing between the four units, giving each access to 0.5-2.0 Gbps of internet capacity depending on concurrent usage between the units. IMHO, getting an equitable share of a larger pie seems much nicer than being throttled to a static share of capacity... also, essentially all ISPs figured out that, given normal usage rates, over-subscribing a link is a decent cost optimization with very limited impact on the users' internet experience.

My point is that the bandwidth of the 2 Gbps service will be far more unreliable than the bandwidth of the 500 Mbps one. E.g., in the first case you could have up to 20% of the provider's total bandwidth occupied by you alone; with the 500 Mbps service you can contend for only up to 5% of the total, if we assume a 10 Gbps uplink. Now assume the core uplink runs at 70-80% average load: in the first case you could contend for 66-100% of the free bandwidth, in the latter for only 16-25%. So it's less likely your traffic will be heavily impacted by other users if the average service in the area is less than 2 Gbps per user. Also, home internet traffic tends to be bursty, so most of the time the 2 Gbps could be achievable in the short term, but the bursts will introduce some delay or packet loss, depending on how the provider limits the service speed. I doubt you can adjust the shaper in real time on a per-packet basis without adding jitter and packet loss to the important traffic, so the initial impact will be felt.
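The arithmetic above can be checked with a quick worked example, assuming (as the post does) a 10 Gbps provider uplink, here at 75% average load; the figures are illustrative, not from any real contract:

```shell
# Share of the *free* uplink capacity each service tier can contend for
total=10000                     # provider uplink, Mbps (assumed)
free=$((total * 25 / 100))      # 25% free at 75% average load -> 2500 Mbps
echo "2000 Mbps user: $((2000 * 100 / free))% of free capacity"  # 80%
echo "500 Mbps user:  $((500 * 100 / free))% of free capacity"   # 20%
```

At 70% load the free pool is 3000 Mbps, giving 66% and 16.7% respectively, which matches the 66-100% and 16-25% ranges above.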

I didn't say that there would be no point or no improvement in doing QoS, but some of the tenants are likely to use a specific application which could be impacted and must go into the QoS, so you must play "service provider" customer support and process trouble tickets. :slight_smile: If some of them are working from home and you are preventing internet providers from installing services on your premises, it could be an issue for the respective tenants; e.g., it must be handled by the terms and conditions of the contract. With 4G/5G, the provider can do its own behind-the-scenes "magic" to keep up the users' quality of experience depending on network load and resource allocation, so it could be somewhat different compared to a wired home-grade connection.

1 Like

For end-user links that will not happen... at least not at a frequency that it becomes an issue.

Here is the thing: for most internet use-cases 500 Mbps, let alone 2 Gbps, is massive overkill. Put differently, going from say 100 Mbps (which is already generous for today's use-cases*) to 500 Mbps will in all likelihood not result in a perceivable improvement (a measurable improvement, likely, but not something that feels immediately noticeably faster).
The key "trick" here however is competent AQM and scheduling on a link (independent of link capacity). E.g., with sqm on my boring 105/37 Mbps link I can download multi-GB updates at shaper speed until other computers also require some capacity, at which point cake will make sure that the bulk download gets throttled to make room for the other flows... and once those flows are done, the bulk downloads can scale up to shaper capacity again...
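Roughly, the kind of setup sqm-scripts builds for such a link looks like the following sketch (rates from the post; interface names are placeholders, and sqm-scripts itself handles more corner cases):

```shell
# Egress: shape upstream to just below the 37 Mbps uplink rate with cake
tc qdisc replace dev eth0 root cake bandwidth 36mbit diffserv3 nat

# Ingress: redirect incoming traffic to an IFB device and shape it there,
# since qdiscs can only be attached on the egress side of an interface
ip link add name ifb0 type ifb 2>/dev/null
ip link set ifb0 up
tc qdisc replace dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress redirect dev ifb0
tc qdisc replace dev ifb0 root cake bandwidth 100mbit besteffort ingress
```

Shaping slightly below the contracted rates moves the queue from the ISP's equipment into cake, which is what keeps latency low under load.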

The answer is: it depends. In my case, my ISP's low-latency and jitter performance under saturating loads is not terrible, but I can still improve it noticeably by running cake qdiscs on ingress and egress. If there is no queueing, the additional delay from piping data through cake is not really noticeable (Linux expects a qdisc anyway, so the question is not no qdisc versus qdisc, but which qdisc). Sure, you can measure a slightly higher CPU load, but any potential increase in static delay and jitter is more than compensated by higher responsiveness under load; occasionally it is the outliers that affect perceived performance most, and getting those under better control can have a bigger effect than having a slightly lower minimum.

But why? It turns out that in our example, first doing fair sharing between units and then fair sharing between internal hosts (and flows on top of that) will remove quite a lot of the justification for traditional prioritizing QoS methods...

Sure, but that is a legal issue best discussed in venues other than this forum; this is not one of the core competences around here (and even if it were, relying on legal opinions from the internet sounds like a recipe for failure).

Alas, many users on mobile links are not impressed by their ISP's Quality of Experience, this discrepancy between user-desire and ISP-delivery is what sparked the development of the different *-autorate implementations...

*) Sure there is the bulk-download issue, that e.g. happens for gamers being stuck behind waiting for a multi dozen GB download for a game update before they can start playing, but typically really large downloads are not "blocking" other use-cases of an internet link, at least if that link is configured properly to not let the download degrade responsiveness of the whole link.

@kaivorth I apologize for being one of several who took this thread deeper into the weeds (my contribution being the callout to validate ISP ToS) than it needed to be, perhaps.

Anyway, this response assumes you have your legal T's crossed and I's dotted and have a technical plan in place for fair queueing and shaping... and let's say now you just need the hardware to do it :slight_smile:


From a residential consumer side of this, I can't see anything but an x86 box being able to handle the speeds you're talking here. Sure, there is very purpose-built hardware for ISP and/or data-center usage. But you're talking multi thousands of dollars at that point.

So with x86, you've got some options. There will be the Intel camp and the AMD camp, largely. I personally went with Intel when I built my x86 router because 1) I've never had a bad Intel CPU (can't say the same of AMD) and 2) I planned to use an Intel multi-port PCIe NIC.

I picked up a used Dell Optiplex 7010 from ebay. I think I paid ~$125 for it at the time. It came with an i7 @ 3.4GHz (quad-core with HT), 8GB RAM and a 500GB HDD, which I dumped posthaste and replaced with a cheap-o SSD. If I'm remembering correctly, it can take 2 (if not 3) PCIe half-height cards. So, I used one of those slots to drop in a quad-port 1Gb Intel NIC.

4-5 years later, I'm still running solidly with it. It is now running OpenWrt natively--though I was running ESXi on the bare hardware with OpenWrt as a VM successfully for a time. I can't say how long the power supply, or any other component in it, is going to keep running. But if something breaks, I'll pick up another one from ebay and fix it up.

If you're willing to get hands-on with the hardware, I recommend the general approach I went with. The ability to customize it and make it exactly what you want it to be is empowering. There are other more "plug and play" type boxes out there with 2.5GbE ports that might have enough x86 horsepower to do the job too. Names like "Qotom", "Protectli", etc. come to mind, but I have no personal experience with them and don't know how the Celeron J procs compare to i5/i7.

I generally would stay away from i3, though. From personal experience with several i3 procs, I vowed to never again own one. I would go i5 at a minimum.

All of this said, x86 and low power consumption are generally at odds with each other. But it is all relative anyway. Compared to a 2-core ARM CPU, even a 2-core x86 CPU is going to use more power. Providing shaping and fairness at 2Gb isn't a small ask and therefore can't use a "small" CPU, either. I see a fair number of people in this forum as of late trying to shave a few watts off their daily consumption. While I appreciate the spirit of it, oftentimes the human energy and cost expended in achieving the savings is greater than the reward, IMHO.

Back to my recommendation, a similar Optiplex 7010 can be had for around $70. There may be other SFF equivalents that are more popular in other countries. HP and Lenovo make some SFF equivalent units that often go on ebay off-lease and have expansion capabilities for half-height PCIe cards, like my Optiplex.

Hopefully this is remotely helpful for you and if you want to dive into particulars around my build, let me know!

2 Likes

@kaivorth As a follow-up to my prior book post, one additional callout I would make is this...

If you are looking for a genuine Intel NIC, make sure you buy from a reputable vendor. I would especially caution from purchasing it on Amazon without making sure the seller is legit. There is a shady market for selling non-legit Intel branded equipment and the NICs are no exception. That's as deep into this as I'll go here, for fear of taking your thread off track again. Just Google "fake intel nic" for more background.

1 Like

If you do the x86 build I mentioned, you should be able to fall within the $200-$300 range. Dual, multi-gig NICs are going to be the most expensive part of that, though.

Could you also expand on this for us? What WiFi speed are you hoping to provide each tenant?

A used tiny form factor unit (Lenovo, HP, Dell, etc.) with an i5/i7 *T CPU consumes only 5W more than a wrt3200acm when idle, probably less if run in powersave mode. If you can put it in a cold place (basement?), the fan will run slower and the CPU will last a few additional years. Then you can use a USB Ethernet dongle (Realtek) for 1 Gbps and a cheap managed switch (for isolation with VLANs), or multiple dongles: these tiny units have a lot of USB3 ports (mine has five). There are now 2.5 Gbps Realtek-based ones too.
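The VLAN-isolation idea above can be sketched with plain `ip link`: one tagged VLAN per tenant over a single trunk to the managed switch. The interface name and VLAN IDs are placeholders:

```shell
# Create one 802.1q VLAN sub-interface per tenant on the trunk port "eth0";
# the managed switch then maps each VLAN to that tenant's access port.
ip link add link eth0 name eth0.10 type vlan id 10   # unit 1
ip link add link eth0 name eth0.20 type vlan id 20   # unit 2
ip link set eth0.10 up
ip link set eth0.20 up
```

On OpenWrt the same result is normally expressed in `/etc/config/network`, but the underlying mechanism is identical.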

1 Like

Re: the Intel CPU, the BIOS on these small or tiny form factor units should allow HT (hyper-threading) and burst speed to be disabled (separate toggles). So, experimenting with disabling HT and/or burst can also positively affect the power consumption and, by extension, heat. :slight_smile:
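For experimentation, both toggles also have runtime equivalents in sysfs (no reboot into the BIOS needed); the paths below exist on recent kernels with the intel_pstate driver:

```shell
# Hyper-threading / SMT: read current state ("on", "off", "notsupported")
cat /sys/devices/system/cpu/smt/control
# Disable SMT at runtime
echo off > /sys/devices/system/cpu/smt/control
# Disable turbo/burst (intel_pstate driver only)
echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo
```

Both settings revert on reboot, which makes them handy for before/after power and latency measurements.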

This does not make much sense. In typical SOHO router and firewall scenarios you have a more or less stable load on the networking equipment that is near idle.

But: hyper-threading (simultaneous multithreading) and clock scaling up to turbo frequencies help performance in the short bursts of high load.

If you want to save power in typical SOHO environments: look for low static idle power usage, since that is drawn 24/7.

2 Likes

I'm only speaking from my hands-on experience with my x86 hardware. Just for giggles (and a sanity check), I switched off TurboBoost in real-time on my router just now:
echo 1 | tee /sys/devices/system/cpu/intel_pstate/no_turbo

Running speed tests from an ethernet-connected host, I'm getting ~940 Mbps in both directions (I have 1Gb symmetric fiber and run SQM). Even with TurboBoost switched off, there is so much CPU headroom with my i7 it's not even funny. My router's load average goes to ~0.35 at the 1Gb line rate. Also, I see zero measurable effect on jitter even with TurboBoost off; I'm in the 0.10ms - 0.18ms range with TurboBoost on or off.

I don't disagree with you at all. I think my point is that these two things are not necessarily either/or. In other words, you can select a reasonable i5/i7 CPU with a low TDP (now called Processor Base Power) and still disable TurboBoost if it truly isn't needed. This gives you the low idle usage you called out, but also keeps the CPU from hitting higher temps during TurboBoost if it really has little measurable effect on performance (as I demonstrated in my particular use-case above).

But, should the OP need TurboBoost in their particular scenario--so let's say it does provide @kaivorth a measurable effect--then leave it enabled. No biggy. :slight_smile:

Turning off burst CPU clock scaling into turbo mode, or turning off SMT, will not substantially decrease yearly power consumption in a scenario with a near-idle static load 95% or more of the time.

I would instead prefer building a low power energy efficient hardware setup and an efficient software configuration that matches the use case. Optimizing power draw for the 95% of the run time.

I'm content with not going back and forth with you on this. The data I've provided here for @kaivorth is backed up by my first-hand knowledge with the hardware I've detailed. :slight_smile:

It sounds like you have a different scenario/solution in mind with some pros that I've not addressed. I'm sure having you detail it here would be of benefit to @kaivorth and the rest of the community.

Ok, so you agree that turning off SMT or clock scaling into turbo mode does not substantially help reduce power draw for routing, firewall or Wi-Fi equipment in a SOHO environment with a constant near-idle load 95% or more of the time.

This statement does not make sense to me in the discussed SOHO network equipment use case. On device heat, I agree for the short expected bursts, but it should not matter, as this is solved in the hardware design. On power draw, I disagree with the whole proposal. I recommend making use of the existing hardware multithreading capabilities and CPU clock scaling into turbo mode, for low latency and a good overall user experience in the expected burst-load use cases.

And I again recommend to save power and reduce heat production by optimizing the network setup for a low static (idle) power draw.

Also: your proposal was not about performance, but about power draw and heat. That is what I am discussing.

Thanks for all the replies so far.

I'm narrowing down on a N5105, N6005, i3-1215U, J4125, J6413 (Can't find much on SQM on these)

I don't mind spending a little more if I really need to.

The Optiplex 7010 is cheap but I value something really small and clean.

1 Like

Here's a recent post of mine on one of the aliexpress mini-pcs:

1 Like

Oh yeah, I was imagining feeding each apartment with 1Gbps and having just an upstream 2.5Gbps link. Downstream is a little more complicated. You can still do an IFB, but you would probably benefit from HFSC with a class per apartment and an overall max rate equal to the aggregate downlink rate.

But in any case, it's possible to do fairness per unit rather than per IP; per-IP fairness might not be particularly fair here, since if one apartment has 4-5 people and another has 1, the apartment with 1 might get just 1/6 of the bandwidth.
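A minimal sketch of the HFSC-on-IFB idea above, with one class per apartment capped at the 2 Gbps aggregate; the device name, rates and class IDs are placeholders, and the per-subnet filters are only indicated:

```shell
# HFSC root on the IFB device carrying redirected downstream traffic
tc qdisc add dev ifb0 root handle 1: hfsc default 10
# Parent class enforcing the 2 Gbps aggregate downlink rate
tc class add dev ifb0 parent 1: classid 1:1 hfsc ls m2 2000mbit ul m2 2000mbit
# One class per apartment: equal link-share (fair under contention),
# each allowed to borrow up to the full aggregate when the link is idle
for i in 10 20 30 40; do
    tc class add dev ifb0 parent 1:1 classid 1:$i \
        hfsc ls m2 500mbit ul m2 2000mbit
    tc qdisc add dev ifb0 parent 1:$i fq_codel
done
# u32 filters matching each apartment's subnet would then map traffic
# into classes 1:10 through 1:40.
```

The `ls` (link-share) curves provide the per-unit fairness, while fq_codel inside each class keeps per-flow fairness within an apartment.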