--socket-stats only works on the up. Both the up and down look OK; however, the distribution of TCP RTT looks a bit off. These lines should be nearly identical.
Three possible causes: 1) Your overlarge burst parameter. At 35 Mbit you shouldn't need more than 8k!! The last flow started up late and doesn't quite get back into fairness with the others. 2) They aren't using a DRR++ scheduler, but plain DRR. 3) Unknown. I always leave a spot in there for the unknown, and without a 35 ms RTT path to compare this against, I just go and ask you for more data.
No need for more down tests at the moment.
A) try 8 and 16 streams on the up.
B) try a vastly reduced htb burst.
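Concretely, if the shaper is a plain HTB setup, something like this would bring the burst down to the 8k suggested above. The device name and class IDs here are placeholders; check your actual hierarchy with `tc -s qdisc show` / `tc -s class show` first.

```shell
# Sketch only: assumes an existing HTB qdisc with handle 1: on eth0 and
# the 35 Mbit shaping class at 1:1 -- substitute your real device/classid.
tc class change dev eth0 parent 1: classid 1:1 \
    htb rate 35mbit burst 8k cburst 8k
```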
A packet capture of a simple 1-stream test also helps me, on both up and down, if you are in a position to take one. I don't need a long one (use -l 20), and tcpdump -i the_interface -s 128 -w whatever.cap on either the server or client will suffice.
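Putting those two pieces together, a capture session might look like the following. The flent test name and server hostname are examples; substitute the test and server you've been using.

```shell
# Start the capture first, on either endpoint: -s 128 snaps each packet
# to 128 bytes, keeping TCP headers but keeping the file small.
tcpdump -i eth0 -s 128 -w whatever.cap &

# Then run a short single-stream test (-l 20 = 20 seconds).
# "tcp_upload" and the hostname are placeholders for your usual setup.
flent tcp_upload -l 20 -H flent-server.example.com
```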
I am temporarily relieved; it's just that the drop scheduler in the paper was too aggressive above 100 Mbit... and we haven't tested that on this hardware...
Anyway this more clearly shows two of the flows are "stuck":
I have a self built NAS, it has a 1Gb/s NIC in it. I ran speedtest (Ookla) on it just now:
Speedtest by Ookla
Server: Jonaz B.V. - Amersfoort (id = 10644)
ISP: KPN
Latency: 3.35 ms (0.11 ms jitter)
Download: 502.56 Mbps (data used: 238.9 MB)
Upload: 598.88 Mbps (data used: 1.1 GB)
Packet Loss: 0.0%
Result URL: https://www.speedtest.net/result/c/812e0f44-d77e-4a36-9a1e-35e19707c1c6
It's getting older now, built about 6 years ago with low energy consumption in mind, but it still fits the bill: quad core, 8 GB RAM.
Would this be decent enough as a server? I'm running @ACwifidude 's NSS build (21.02) on my R7800. On the other R7800 I'll be happy to sysupgrade to a build from @KONG with the settings used at 100 Mbps, so I can produce some flent output to see what happens above 100 Mbps. I need to try to match the same circumstances, of course.
I think as an architecture for people to use to make NEW hardware... it's pretty dead. The OSS world has a tendency to keep stuff working for decades after it's no longer actively being used for new mfg. Like Linux kernel for SPARC or whatever.
I'd like to see multicast be much more widely used across basically the entire internet. Since it seems unlikely across typical ISPs perhaps that's best implemented across the overlay networks I mentioned. So how about a multicast routing suite and some kind of multicast "visualization" to make it clearer what multicast is routing where?
Some example thoughts:
Suppose you want to watch a movie with your friends. You just start streaming it off your media server, via multicast IPv6 (perhaps in the ff08:: or ff0e:: scopes). Your friend then simply subscribes to this IPv6 multicast stream, and within a few seconds the wireguard tunnel between your houses is streaming the same live multicast stream into both homes. A third friend wants in on the party, so they fire up and subscribe... voila, watch-party!
Suppose you want to play realtime games with your friends. You fire up your "multicast audio comms" software (along the lines of mumble), and by subscribing to some IPv6 global-scope multicast stream with a randomly generated address (ff08:ab82:1104:3310:8890:fafa:f1e6:0001 for example), magically, audio from all your devices is simultaneously routed across an overlay network to all your houses.
Let's stop letting ISPs, record companies, and big fat network conglomerates decide what we're allowed to do.
Copyright and patent defense/collaboration is another; or perhaps a better way to put it is "how the next Internet can pay creators without throwing away all the advantages of worldwide instant communication"
Digital value transfer (reliable digital cash)
Reliable node-level cyber security
Reliable network-level cyber security
Reliable internetwork-level cyber security
Privacy on a large scale network.
How to leverage information asymmetries for ordinary users' benefit.
Distributed social networking.
Avoidance of pinch-points like Google and Facebook that bend a widely distributed system into an access network that somehow always leads to their monopoly.
What replaces the Web as the next big obvious thing that we should've done years before, which takes over the world's idea of "what the Internet is"?
Creating better business models than (1) move bits as a commodity, and (2) force ads on people!
Yeah, IPFS feels like a good idea that isn't going anywhere.
What it should do is be the mechanism to eliminate scientific journals: anyone who wants to publish some stuff just throws it into IPFS and, bang, there it is... But it won't happen unless there are searchable indexes, and unless "journals as recommender systems" take off... neither of which will happen.
For too many of the world's problems, the problem is people, not technology.
https://web.hypothes.is/ I like. Fixing the whuffie problem is not in scope for CeroWrt II, but with the rise in serious amounts of storage along the edge, distributing data better does strike me as a good goal. What is so wrong with torrent, btw?
If I read it right I think it's rather cool in principle: the ability to publish to a uniform standard with support for hardening, integrity, high availability and anti-censorship as intrinsic properties of that standard, rather than having to select, engage, manage, purchase and hope for longevity among distinct offerings for each of those requirements. The act of publishing safely and reliably would have some separation from your technical ability, your means, your geopolitical situation, your access to knowledgeable support, etc.
Edit: on reading more thoroughly, it's all a bit too venture-capitalized / incubator and hothouse-driven a la Docker; not so much community led. So for all the emphasis on community and open-source in their language -- which is itself very marketing-agency in style and content -- it's still privately directed, with all that that does and doesn't entail. I'm more optimistic about RFCs than mission statements.
For $100 or so you can get an RPi4 with 4 GB of RAM and an SD card with 32 GB of storage. And yet OpenWrt is still designed to run on 8 MB of flash and 64 MB of RAM.
Now, the RPi4 is not really a router/network appliance, but the point is: a router that can route a gigabit, SQM a gigabit, wireguard a gigabit, and still have a couple CPUs left over to do smart stuff is kinda the baseline we should be targeting.
Let's put Julia on a router and do Bayesian inference on the DSCP values each flow should have, to maximize the expected satisfaction of the users based on a history of its transmission rates and such (perhaps by compiling eBPF from Julia via https://github.com/jpsamaroo/BPFnative.jl and directly installing the result in the kernel, updated every 20 minutes).
Let's put a mesh database of named network resources on the internet, so you can go to your browser and say "what new pictures or stories has my friend dtaht put up on his friends gallery?" and get 100% of the actual benefit of Facebook without the intermediary, and with complete separation of UI from publication... also using strong encryption, with session keys published to your friends.
Let's auto-detect suspicious behavior by studying a sample of 10% of the packets going through the router, forking them to user-space from nftables and running them through a neural-network classifier into a Bayesian decision-theory optimizer, and have the router send Signal messages to your phone when it detects serious probes, or that devices on your LAN have been infected with malware.
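The "sample 10% of packets to user-space" part is roughly expressible in nftables today via the numgen expression and an NFQUEUE. This is a syntax sketch only; the table/chain names are made up, and a userspace program (e.g. via libnetfilter_queue) would need to be attached to queue 0 to do the actual classification.

```shell
# Sketch: send roughly 1 in 10 forwarded packets to NFQUEUE 0.
# "bypass" means packets are accepted normally if no userspace
# listener is attached, so sampling never blackholes traffic.
nft add table inet sample
nft add chain inet sample fwd '{ type filter hook forward priority 0; policy accept; }'
nft add rule inet sample fwd numgen random mod 10 == 0 counter queue flags bypass to 0
```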
Have hot-spare fallback support built in via keepalived / VRRP. When you buy a router, it's two routers in one box, with two separate power supplies.
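For the VRRP half of that idea, a minimal keepalived sketch might look like this. The interface name, virtual router ID, and address are placeholders, not a tested configuration.

```
# /etc/keepalived/keepalived.conf (sketch; br-lan, VRID 51 and the
# address are placeholders). The hot-spare unit runs the same stanza
# with "state BACKUP" and a lower priority; it claims the virtual
# gateway IP if the master stops sending VRRP advertisements.
vrrp_instance LAN_GW {
    state MASTER
    interface br-lan
    virtual_router_id 51
    priority 200
    advert_int 1
    virtual_ipaddress {
        192.168.1.1/24
    }
}
```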
Is it worth splitting these items into a few different categories? Long-haul wireless, for example, requires a commercial hardware infrastructure solution. Thread support is going to require some level of home hardware. Network reporting tools are a UX/UI problem. etc.
Some of these are multi domain problems, but it seems there's usually a primary problem that will require the majority of effort for the first pass.
Would it help to break out improvements to some sort of tier structure based on audience or hardware requirements?
@dtaht: First and foremost, thank you so much for your work in eliminating bufferbloat. CAKE SQM is the best. I have a few questions, although they may be a bit off topic:
Does CAKE currently run on only a single CPU core in the current OpenWrt 21.02 and SNAPSHOT (kernel 5.4, kernel 5.10, and maybe kernel 5.15 in the future)? If so, can it be made to use multiple CPU cores and thereby handle more bandwidth?
I had a Belkin RT3200 (dual-core 1.35 GHz ARMv8 MediaTek CPU) that I was using with a Comcast Xfinity 600/20 Mbps DOCSIS plan (20% overprovisioned, to 720/24 Mbps). With CAKE + layer_cake.qos at the 600,000/20,000 setting on the RT3200, I still only got around 400 to 450 Mbps download (wired), which I think is because of CAKE's single-core limitation. With FQ_CODEL + simple.qos, I got close to the full 600 Mbps download (wired).
I read that FQ_CODEL is the default qdisc in Linux and maybe *BSD, macOS, iOS, etc. Do you know if it is also implemented in Windows 10, Windows 11, and Android?