A confusion of market forces vs. public good is at the heart of pretty much all of the world's disparities. Amazon has paid a lot of money for IP blocks as a capital investment, and nobody told them "you can't do that because it's a public good". To turn around and say "it's a public good again and you didn't really buy them, now rent them from us" is not going to go over well.
I'm firmly in the "public good" camp. I don't think they should have been able to buy them in the first place. But we're there now and maybe we can dial it back with the IPv6 address space, but those who were allowed to trade them will have a case to bring, and they have a lot of clout.
No doubt you are right; big rich companies are definitely incumbents who want to keep a strong, expensive market for IPv4. We shouldn't do mechanism design as offhand comments in discourse forums. The main thing I'm trying to communicate is that the problem is political, economic, and game-theoretic, not a technology problem.
Feature: Integral battery backup. With the vast reduction in lithium battery costs over the last decade, it's possible to keep a device running at least through the typical power flicker. This is a subtle improvement driven by the cellphone world, which already has battery backup on the head-ends; thus phones tend to stay up longer than homes. Presently.
I've done this for a Raspberry Pi using a USB phone-booster battery: as long as it's of the spec that can charge and discharge simultaneously (cheaper ones can't), it's a nice little UPS that can see a consumer embedded system with a single-digit-watt draw through a very long outage.
I think cheap, but powerful enough, with believable long-term availability, would be great. Or just screw it and go x86_64 with proper PCIe sockets (abandoning cheap while keeping the other two)... it really depends on what kind of users you want to attract.
An end result that I really wanted was for ISPs to ship better gear, with a commonly maintained, constantly updated, quality OS like OpenWrt.
That was and remains kind of in conflict with some of the goals I've outlined in CeroWrt II so far - I vastly prefer "research" with no pesky end-users to deal with!
But other goals, like for example "one-click RIPE IPv6 spec compliance", do fit. If there were a true reference standard for a quality device, much like how the Apple AirPorts were once viewed, then we could raise the bar for everyone.
Part of that, to me, is that proprietary offloads have got to go, the hardware needs to be designed and supported for a 10 year service lifetime, and so on.
Well, that essentially removes the price cap then. A few enthusiasts will be willing to spend some money to play with CeroWrt II, but even then, the cheaper the device, the easier it is to participate.
I have a real question, though: are these standards actually worth complying with? I see a few things maturing now in the IETF's tsvwg that will end up being official standards in spite of being "utter shite"... Are you sure that's different in IPv6-land? Then again, the best way to figure out whether the standards are worth their salt is to implement them and see how the resulting system behaves.
Yep, getting enough sufficiently powerful CPUs into a router so it can handle whatever is thrown at it would be nice. That also limits the choice of CPUs considerably: mid- to high-end ARM/Intel/AMD with sufficient system and memory bandwidth (assuming this aims at routing at 1 Gbps rates at least). I like Jesper's approach from a few years back, when he ran 10 Gbps Ethernet with minimal-size frames to really stress the device. His logic: since most packet mixes are biased towards much larger packets, testing with minimal packet sizes lets you test routing capability far beyond the 10 Gbps link he tested on. I think he needed a multi-socket Xeon system to pull it off, though.
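The point of minimal-size frames is just wire math: at minimum frame size, a link carries far more packets per second than typical traffic mixes would suggest. A quick sketch of the standard Ethernet arithmetic:

```shell
# Wire cost of a minimum-size Ethernet frame:
# 64 B frame + 7 B preamble + 1 B start-of-frame delimiter + 12 B inter-frame gap
# = 84 B = 672 bits per packet on the wire.
pps() { echo $(( $1 / 672 )); }   # link rate in bit/s -> packets/s

echo "1 Gbps:  $(pps 1000000000) pps"    # ~1.49 Mpps
echo "10 Gbps: $(pps 10000000000) pps"   # ~14.88 Mpps
```

So a router that shapes "1 Gbps" of bulk traffic comfortably may still fall over at ~1.5 million packets per second of small frames; a 10 Gbps small-frame test is an order of magnitude beyond that again.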
Yes, for a router that would be nice. (Side note: my refurbished WNDR3700v2, bought initially at Fry's in Burbank (styled as if a UFO had crash-landed in the store) to test CeroWrt I, is still up and running, "demoted" to AP duty but still operational.) OR something where more modern, fully compatible replacements can be expected (again pointing to x86). Something like the APU line, only with a few beefy cores instead?
An internal UPS by default is probably not as useful (globally) as one might think at first:

- Blown fuse: just push it back in, let the router reboot, done. The service interruption might not be nice, but your modem/ONT/switches and desktops will be affected the same way, so there's no real need for the router to be more resilient than the rest of your network (it just needs to be able to reboot properly).
- The whole street being powered off happens very (very) rarely where I'm located. Even if it happened often enough to matter, I'd then have to maintain power for more than just the router as well (see the blown fuse above). I realize this situation might be severely different in other regions of the world, but there you have to think about UPSes/independent generators anyway, and adding the router to the 'safe' circuit wouldn't be much of a burden.
- Some kind of emergency hitting the whole town, be it flooding, snow/ice, storms, etc. Let's face it: even if your own house has a perfect backup, your ISP doesn't. Analogue phone lines, where they still exist, might have been built with battery backups, but have they been serviced within the last couple of decades? Modern ones (VoIP/SIP, cellular) don't even try and will go down quickly in this case anyway (30-60 minutes at best, and that's if your cellular base station and its fibre-backhaul routers are equipped with battery backup to begin with). Yes, I've been through a week of that; 2G cellular connectivity was gone within half an hour.
- Having a battery constantly on charge increases the risk of it catching fire, or of the cells bulging and dying over time; these things need regular servicing.
On a small scale, especially if hardware design and maximizing profits aren't top priorities, x86_64 is hard to beat, and it stays viable long beyond the lifetime of many past high-end MIPS/ARM SoCs (case in point: the almost 10-year-old Ivy Bridge C1037U can do routing/NAT and sqm/cake at 1 Gbit/s line rate easily, at around 53% CPU usage on one core, without even hitting its maximum clock speed).
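For reference, the sqm/cake shaping in that data point boils down to a single tc invocation (OpenWrt's sqm-scripts wrap the same thing in UCI config). The interface name and bandwidth below are illustrative assumptions, not a measured recommendation:

```shell
# Shape egress slightly below line rate so the queue builds here, not in
# the modem; 'ethernet' applies cake's standard per-packet overhead preset.
# eth0 and 950mbit are assumptions - adjust for your hardware and link.
tc qdisc replace dev eth0 root cake bandwidth 950mbit ethernet

# Inspect the shaper and its per-tin statistics:
tc -s qdisc show dev eth0
```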
The scope of various sub-projects here is vast. If there is anyone out there that wants to take some of it on, or propose their own thing, both nlnet and the comcast innovation fund have been good sources for funding for me, and applying is very straightforward.
I am now on a small nlnet grant to deliver some bits described here: https://nlnet.nl/project/CeroWRT-II/ and some of the work is gated on the other wonderful projects now ongoing. My hope is to be able to stay engaged in the openwrt universe for this entire release cycle as well as continue my advocacy work elsewhere.
Suggestions for other funding sources welcomed.
I'm also heavily involved in attempting to shift US policy and broadband funding towards lower-latency, IPv6-enabled networks, most recently with the publication of this BITAG report on network latency, which I hope gains policymaker readership worldwide.
Back to the original topic, some random things I would like to see:
IPv4 is here for the indefinite future; yes, new services from bad-at-technology organizations like Discord are still launching IPv4-only, but that doesn't mean end-user devices have to be IPv4. macOS has built-in NAT64. It shouldn't be so hard on OpenWrt.
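As a sketch of what "not so hard" could look like: OpenWrt packages the tayga stateless NAT64 translator. It's normally driven through UCI on OpenWrt; the raw config below is the illustrative stand-alone form, and the pool addresses are assumptions:

```shell
opkg update && opkg install tayga

# Illustrative /etc/tayga.conf; on OpenWrt this is usually expressed as a
# UCI network interface using proto 'tayga' instead.
cat > /etc/tayga.conf <<'EOF'
tun-device nat64
# Router's own address inside the NAT64 pool:
ipv4-addr 192.168.255.1
# Well-known NAT64 prefix (RFC 6052):
prefix 64:ff9b::/96
# IPv4 addresses mapped dynamically to IPv6 clients:
dynamic-pool 192.168.255.0/24
data-dir /var/db/tayga
EOF
```

Clients additionally need DNS64 (a resolver that synthesizes AAAA records against the same 64:ff9b::/96 prefix) before IPv4-only services become reachable from an IPv6-only LAN.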
Subnetting, maybe also with mDNS-related adapters
Trash-IoT devices are often IPv4-only, but they don't have to be on the same subnet as the end-user devices. The problem that comes to mind right away is when those trash devices communicate directly with the end-user device: HomeKit and IPP Everywhere don't work easily when the trash device is on a separate subnet.
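One common workaround is mDNS reflection on the router, so discovery traffic crosses the subnet boundary even while routed traffic stays firewalled. A fragment of `/etc/avahi/avahi-daemon.conf` (the interface names are assumptions for a router bridging a main LAN and an IoT subnet):

```ini
[server]
# Only listen on the main LAN and the IoT subnet (names assumed):
allow-interfaces=br-lan,br-iot

[reflector]
# Repeat mDNS queries and answers between the listed interfaces, so
# HomeKit / IPP Everywhere devices are discoverable across subnets.
enable-reflector=yes
```

This makes the trash devices discoverable without putting them on the trusted subnet; the firewall still decides which follow-up connections are allowed.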
Multi-Gb routing and traffic shaping.
Though, at multi-Gb, I think a lot of the bottlenecks are transient and elsewhere in the network.
Active and honest benchmarking against the best commercial products would be good, especially on our metrics: latency to multiple stations under load ( https://www.cs.kau.se/tohojo/airtime-fairness/ ), uptime, crashability, and so on.
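The latency-under-load part of that is largely scripted already in flent. Assuming a netperf server you control (the hostname below is a placeholder), the classic RRUL test is one command:

```shell
# Saturate the link in both directions for 60 s while sampling latency.
# netperf.example.org is a placeholder for your own netperf server.
flent rrul -H netperf.example.org -l 60 \
      -t "router-under-test" -p all_scaled -o rrul.png
```

Running the same test against a commercial router and ours, on the same link, gives directly comparable latency-under-load plots.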
My ISP (CenturyLink) can't seem to be bothered to put in anything faster than shitty old copper lines in my rural area. I'm not in a poor country either; I am one of the estimated 40+ million people in the US who do not have access to broadband internet (which the US government defines as 25 Mbps down, 3 Mbps up).
"FCC’s latest report (released in January 2021) estimates across the U.S. 14.5 million people don’t have broadband -- and experts believe the actual number could be twice that or more. A BroadbandNow study released in February 2020 estimates that as many as 42 million Americans do not have the ability to purchase broadband internet. While the infrastructure bill dedicates funding for broadband, there’s so much more to do." - https://www.americanconnectionproject.com/
The digital divide is not talked about enough, and it does not just affect third-world countries or "poor" people. Often it's spoiled brats who live in big cities and have no idea what it's like in rural parts of the US (yet they'll tell us to just "buy better internet" as if it exists... but it doesn't, because ISPs just sit on their asses while collecting our taxes... or they'll just tell us to move, but that's far from ideal either)... that is, until they travel out here and experience it for themselves. And you can bet some of those spoiled brats work on websites and fill them to the brim with intrusive advertising and trackers, adding to the pain of slow internet. The modern web is a bloated place.
BTW, without rural communities... huge industries like agriculture would take a hit. So if a bunch of people move away from rural communities just so they can get faster internet... well, you can say hi to higher prices on just about everything.
And it only gets worse in online gaming communities, when toxic, spoiled city brats talk shit about people with high ping, ignoring the fact that it's not even our fault (in other words, we're not manipulating our ping or cheating)... it's our shitty ISPs, over which we have little to no power. A lot of them even think that having a high ping gives us an advantage, but it usually doesn't (it just makes our in-game actions arrive late, which of course can hurt the number of kills we get, etc.). Some games will also kick you for high ping (even if it's just a fluctuation for a few seconds) and may even ban you, even though you spent money to purchase the game in the first place.
I just found CeroWrt's old credits file, which was on the router but not on the net. It's http, not https. But we were so amazing in those 3+ years, and sigh, I'm so much older now. My humble thanks to the many (that I could identify) who helped is here: