What is required to get cake to shape at 10Gbps, @moeller0? Would a typical desktop computer, say an i7, manage that?
In Germany, no; ISPs are required to publish a standardized information sheet in which they have to list a number of obligatory numbers, including (separately for the download and upload direction):
a) the maximal rate (this does not need to be the true physical rate, but the rate the ISP is willing to guarantee)
b) the usually achievable rate
c) the minimal rate
Then the BNetzA (our regulatory agency) has a speedtest app and specific rules about each of the three numbers and what users need to be able to measure for the ISP to comply with regulations.
So no, "up to" is not cutting it in Germany any more. ISPs need to give robust rate estimates (they can pick these numbers themselves) and can face financial consequences if they do not deliver on these contracted rates. The actual measurement procedure is a bit tedious*, but the overall concept is pretty user-friendly.
Well, hanging a 1 Gbps home network on a 10 Gbps WAN is not the worst thing that can happen, but I agree the user cannot expect to see more than the ~950 Mbps you can get out of 1 Gbps ethernet; one would hope, though, that in a faster 10/10 segment there would be fewer epochs of congestion than in a slower 2.4/1.2 segment, and the guaranteed fair share under load for a 32-user segment is still 8600/32 = 268.75 Mbps, which is a much nicer worst-case scenario to live in than 2400/32 = 75 Mbps (which still is a decent rate).
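To spell out that fair-share arithmetic (a trivial sketch; the ~8600 Mbps usable rate for the 10/10 segment and the 32-user count are the numbers from above):

```python
# Worst-case per-user fair share on a shared segment:
# usable segment rate divided by the number of active users.
def fair_share_mbps(segment_mbps, users):
    return segment_mbps / users

for name, rate in [("10/10 segment (~8600 Mbps usable)", 8600),
                   ("2.4/1.2 segment downstream", 2400)]:
    print(f"{name}: {fair_share_mbps(rate, 32):.2f} Mbps per user (32 users)")
```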
I would guess cake is a bit heavy here; you will need something that can use multiple CPUs concurrently. I have no first-hand experience, so I can only guess, but I think @tohojo would know.
*) You need to measure on three different non-consecutive days that fall within a fortnight; on each day you need to make ten measurements, where measurements 1-5 and 6-10 need to be separated from each other by an interval of at least 5 minutes, and measurements 5 and 6 by at least 3 hours. Then there are rules for interpreting the results. An ISP's performance is considered deficient if:
a) the user did not measure at least 90% of the contracted maximal rate on two of the three measurement days
b) the user did not reach/exceed the contracted usually achievable rate in at least 90% of all 30 measurements
c) the user fell short of the contracted minimal rate at least once on two of the three days
That is a bit convoluted and leads to relatively long measurement campaigns that will scare away some affected users, but at least there is an official process to confirm ISPs' contract compliance.
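To make the three deficiency conditions concrete, here is a rough sketch of one possible reading of them (treating condition (a) as "the day's best measurement stayed below 90% of the maximal rate" is my interpretation; all names are mine):

```python
# Sketch of the three BNetzA deficiency conditions described above.
# `days` is a list of 3 lists with 10 measured rates (Mbps) each;
# max_rate, usual_rate, min_rate are the contracted values.
def isp_deficient(days, max_rate, usual_rate, min_rate):
    # a) less than 90% of the contracted maximal rate on two of the three days
    a = sum(max(day) < 0.9 * max_rate for day in days) >= 2
    # b) the usually achievable rate not reached in at least 90% of all 30 samples
    samples = [r for day in days for r in day]
    b = sum(r >= usual_rate for r in samples) < 0.9 * len(samples)
    # c) at least one sample below the minimal rate on two of the three days
    c = sum(any(r < min_rate for r in day) for day in days) >= 2
    return a or b or c
```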
Hahaha that's so typically German!
Reminds me of:
http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CONSLEG:1994R2257:20060217:EN:PDF
The lowest-power x86_64 machine in my home here draws ~2.8W when idle. It's an ASUS VivoMini UN45 with 4GB of RAM, a dual core Celeron CPU that's (generally speaking) vastly faster than most options available in SoHo routers, and 512GB of NVMe mass storage.
The most energy-efficient Access Point I have in operation is the Belkin RT3200 with latest OpenWrt, which draws around 4.5 to 5W, iirc.
I was really curious about its power draw. 4.5 to 5W seems pretty decent, doesn't it?
Well, this is a national consequence of an EU regulation which contains:
"Article 4 (4)
4. Any significant discrepancy, continuous or regularly recurring, between the actual performance of the internet access service regarding speed or other quality of service parameters and the performance indicated by the provider of internet access services in accordance with points (a) to (d) of paragraph 1 shall, where the relevant facts are established by a monitoring mechanism certified by the national regulatory authority, be deemed to constitute non-conformity of performance for the purposes of triggering the remedies available to the consumer in accordance with national law."
The convoluted measurement and interpretation rules aim to make it possible to prove "continuous or regularly recurring" discrepancies between contracted and achieved performance in a way that will survive ISPs' attempts to litigate the problem away.
I would agree that the regulator erred on the side of being too cautious and making the end-user jump through too many hoops here, but at least the regulation exists and has "teeth", in that local telecommunication law (TKG) enshrines end-users' rights to prematurely cancel their contracts or reduce their payments if the ISP does not deliver the contracted rates.*
Interestingly, the Czech Republic has a similar approach but with rather different conditions for the actual measurements.
*) As a "man of the law" you realize that it was always possible to sue one's ISP if the rates fell short of the contracted rates, but that was always quite costly and involved, and there was no official interpretation of what service ISPs actually owe to the customer, making the whole approach considerably more involved and hairy than even the very German measurement and interpretation rules.
What I wonder about is the actual average energy consumed over a day or even a week, more than pure idle or fully loaded numbers. I will measure that soon for my Turris Omnia and my modem.
Maybe we should/could start another thread for collecting energy/power consumption numbers for different home network solutions?
Exactly: instead of the iperf3 thread, it's a "look at my low power" thread!
I just checked and my EAP615-Wall (MediaTek AX, but MT7621 MIPS instead of MT7622 ARM) draws 4.8W (PoE). I do suppose the EAP615 having a single AX radio that serves both bands (DBDC) offers some efficiency gains, but they're probably marginal.
The Xiaomi Mi AIoT Router AX3600 (ipq8071a) uses around 6 watts.
@moeller0 you will find that the idle values are quite a good approximation for the averages, even for busy home environments. Load spikes happen, but they tend to be brief (there is no real difference between really idle and 'normal' background usage) and drown within the idle times. Very active 24/7 p2p usage at high speeds might have the potential to skew the results, but not 'normal' usage (browsing, streaming, downloading, video-conferencing, remote desktop, etc. of an average family).
Edit: my max. values have not been reached while running OpenWrt, but while doing a multi-threaded kernel build and multi-threaded ffmpeg/x264 encoding concurrently (running desktop Linux).
Edit2: The number of active ethernet ports and their speed (100BASE-T vs 1000BASE-T/2.5GBASE-T/5GBASE-T/10GBASE-T) and the intensity of WLAN usage will make more of a dent than variations in (normal) WAN traffic.
Yeah I think that in Germany there is still a sense of trying to do things properly, whereas in Poundland doing things by half measures has become all too familiar.
Happy to see that, however my measurement device does not log immediate power usage, so I will have to compare a few manual readings with a longer term average. I am not disagreeing with your hypothesis, but I am curious enough to want to see more detailed data for my own router.
Occasionally, but there is enough "half-assery" going around here as well (and honestly, not all problems need deep, well-engineered/thought-out solutions).
I'm not convinced shaping with cake at 10Gbps is a good plan. Even sloppy QoS prioritization should make latency a thing of the past.
Consider: a 1500 byte packet takes 1500*8/10e9 = 1.2e-6 seconds, so a buffer of 1000 packets drains in 1.2ms.
If you use something like qfq to make 4 lanes, sending bulk, normal, video, and voice/game traffic into separate lanes, then even with a 1000 packet FIFO in each lane you would still not see noticeable latency.
Not that a FIFO is a good idea, just that as you get faster, you can be a lot sloppier with no noticeable effect.
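For concreteness, the back-of-envelope numbers from above as a tiny check (nothing here beyond the arithmetic already stated):

```python
# Serialization delay of one MTU-size packet and drain time of a
# 1000-packet FIFO at 10 Gbit/s, as in the argument above.
mtu_bits = 1500 * 8
link_bps = 10e9                      # 10 Gbit/s
t_pkt = mtu_bits / link_bps          # ~1.2 microseconds per packet
t_fifo = 1000 * t_pkt                # ~1.2 ms to drain a 1000-packet FIFO
print(f"per packet: {t_pkt * 1e6:.1f} us, 1000-packet FIFO: {t_fifo * 1e3:.1f} ms")
```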
The real issue comes when you have a 10Gbps connection to a bottlenecked switch that is only offering you, say, 7Gbps. But again, a simple token bucket filter can slow you down enough for that kind of situation.
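To illustrate what such a "simple token bucket filter" does (a toy sketch of the classic algorithm, not how tc's tbf is actually implemented; the 7 Gbps rate and 1 Mbit burst are made-up example values):

```python
import time

# Toy token-bucket rate limiter: tokens accrue at `rate_bps` up to
# `burst_bits`; a packet may pass when enough tokens are available.
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate, self.burst = rate_bps, burst_bits
        self.tokens, self.last = burst_bits, time.monotonic()

    def allow(self, pkt_bits):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bits:
            self.tokens -= pkt_bits
            return True
        return False  # caller queues or drops the packet instead

# e.g. shape down to 7 Gbit/s for the bottlenecked-switch scenario above
shaper = TokenBucket(rate_bps=7e9, burst_bits=1e6)
```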
The heavy CPU usage of cake is a good tradeoff below about 1Gbps, but above 2 or 5Gbps it's a very questionable one.
I was trying to be polite in order to end the exchange on a high note.
I'd wager that every single person on this forum has one or two fully functioning old systems lying around, or at least the parts and know-how required to build one.
Putting together a CPU, MB and RAM is literally a matter of 1.5 minutes. The "But I don't have the time" thing is what I usually say on leg day.
It's the same rationale for why you wouldn't buy an all-in-one PC from Medion, or any Windows software, lol.
We are tech enthusiasts and part of the open source community. A consumer-grade solution is simply the antithesis of that.
But what about LAN host fairness?
Others disagree, though admittedly not for the typical end-user leaf network, but more on the side of an ISP that needs/wants to shape individual end-user traffic on a 10Gb+ link.
Well, yes (today) and no (in the long run). 1 Gbps was at one time also considered so ludicrously high that:
a) bufferbloat would never materialize
b) it would stay like that forever
Alas, times changed, networks and devices got faster, and while many 1 Gbps links work fine without traffic shaping, there are 1 Gbps internet access links that noticeably improve by adding competent AQM and traffic shaping...
However the main goal here is not traffic shaping per se but competent AQM, the traffic shaper really only comes in if the true bottleneck is on the wrong end of the link and/or does not give sufficient back pressure.
But that token bucket (aka traffic shaper) is what carries most of the cost of cake/fq_codel... the actual FQ scheduler is surprisingly cheap and efficient.
That only matters if you manage to actually saturate the link for a long enough duration to notice. I think for 1 Gbps links that is already possible, say with torrenting, but I am not sure that on a 10 Gbps link this would be a noticeable issue today (in the future I expect it to become one).
You are using "we" as in "We are Borg." (IMHO and no pun intended).
Please mind that the users of this forum do not consist 100% of tech enthusiasts. There are enough examples of bloody noobs here who do not have a clue what they are doing.
Apart from that, the final answer to your question is: Because not everybody is like you.
Not everybody has your requirements, unused hardware resources, knowledge, time (to select, buy, assemble, troubleshoot), money, care about energy consumption, ... (for details see answers of other users above).
It's possible that the AES-NI feature is not enabled in vanilla OpenWrt installs for x86_64. I was able to push 500Mbps+ IPsec traffic using the AES-NI feature in OPNsense on an old Cisco ASA-5512x. I don't remember if I tried OpenVPN or not.
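If anyone wants to verify whether their x86_64 box even advertises AES-NI to the kernel, here is a quick check (assuming a Linux host; this only shows the CPU flag, not whether a given crypto stack actually uses it):

```python
# Minimal check: does the first CPU entry in /proc/cpuinfo list the
# "aes" (AES-NI) flag? Linux x86_64 only.
def has_aes_ni(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return "aes" in line.split(":", 1)[1].split()
    return False

print("AES-NI advertised:", has_aes_ni())
```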
Can be done in hardware on a switch.
The key is that interactive streams etc. have a kind of bandwidth ceiling beyond which an individual stream sees no real benefit from increasing. Video streams of, say, 40Mbps would give you cinematic-level 4K interactive video. Most people are pretty darn happy with Zoom at like 700kbps. If you take some big organization like, say, UCLA, and imagine a floor in a building where 500 students all want to do 3Mbps interactive video, it's only 1.5Gbps. If each floor has a 10Gbps uplink, and at most 1ms of queue size at 10Gbps, and you prioritize the interactive streams, they shouldn't experience more than 100us of delay at full saturation.
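Rough numbers behind that paragraph (just restating the post's own assumptions of 500 students at 3 Mbps each and a 1 ms queue cap):

```python
# Back-of-envelope check of the aggregate-demand arithmetic above.
students, per_stream_bps = 500, 3e6      # 3 Mbit/s interactive video each
uplink_bps = 10e9                        # 10 Gbit/s uplink per floor
aggregate = students * per_stream_bps    # total interactive demand
print(f"aggregate demand: {aggregate / 1e9:.1f} Gbit/s")    # 1.5 Gbit/s
print(f"uplink utilization: {aggregate / uplink_bps:.0%}")  # 15%
queue_bits = uplink_bps * 1e-3           # 1 ms of queue at line rate
print(f"1 ms queue = {queue_bits / 8 / 1500:.0f} full-size packets")  # ~833
```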
On the other hand, if you're trying to stuff that 1.5Gbps over 1Gbps uplink, it's not going to work...
The only way this really goes wrong is if people start trying to stuff dramatically more students into one building, and physics prevents that. Or if people do something stupid and try to offer cinematic 4K interactive video streams, which no-one really wants... skin blemishes become larger than life... cameras would then need physically much larger sensors and wouldn't fit in laptop lids, etc. Basically physics to the rescue, in some sense.