CeroWrt II - would anyone care?

My understanding is yes. Essentially there is a shared pseudorandom number generator. A packet carries a little overhead to convey the seed, and then each encoded block is some kind of XOR of selected blocks from the source file, plus optimizations to make things efficient.

If you get enough blocks you can reconstruct the whole file using math. But it doesn't matter which blocks, just enough of them.

Redundancy comes from the fact that you're sending an infinite bitstream to transmit a finite set of bytes. But you're doing that because you want anyone who starts and stops listening at any time to be able to reconstruct the whole thing.
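To make that a bit more concrete, here's a toy sketch of the idea in Python. It's a stripped-down LT-style fountain code, not Raptor/RaptorQ: the degree distribution is deliberately naive and none of the real codes' efficiency machinery is here, but it shows how a per-packet seed plus XORs of pseudorandomly chosen source blocks lets a receiver rebuild the file from any sufficiently large set of symbols:

```python
import random

def encode_symbol(blocks, seed):
    """One encoded symbol: the XOR of a pseudorandomly chosen subset of
    source blocks. Only the seed needs to travel alongside the payload."""
    rng = random.Random(seed)
    k = len(blocks)
    degree = rng.randint(1, k)             # naive choice; real codes use a soliton-style distribution
    indices = rng.sample(range(k), degree)
    out = bytearray(len(blocks[0]))
    for i in indices:
        for j, b in enumerate(blocks[i]):
            out[j] ^= b
    return seed, bytes(out)

def try_decode(symbols, k):
    """Peeling decoder: re-derive each symbol's block set from its seed, then
    repeatedly recover any block covered by exactly one remaining unknown."""
    pending = []
    for seed, data in symbols:
        rng = random.Random(seed)
        degree = rng.randint(1, k)
        idx = set(rng.sample(range(k), degree))
        pending.append((idx, bytearray(data)))
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for idx, data in pending:
            for i in [i for i in idx if i in known]:   # substitute already-recovered blocks
                for j, b in enumerate(known[i]):
                    data[j] ^= b
                idx.discard(i)
            if len(idx) == 1:                          # exactly one unknown left: recover it
                i = idx.pop()
                if i not in known:
                    known[i] = bytes(data)
                    progress = True
    if len(known) < k:
        return None                                    # not enough useful symbols yet; keep listening
    return b"".join(known[i] for i in range(k))

# It doesn't matter *which* symbols arrive, only that enough of them do.
k = 8
source = [bytes([i]) * 64 for i in range(k)]
received, seed, decoded = [], 1000, None
while decoded is None:
    received.append(encode_symbol(source, seed))       # pretend these trickle in off the wire
    seed += 1
    decoded = try_decode(received, k)
print(decoded == b"".join(source), "after", len(received), "symbols")
```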

Well, my kids play on Minecraft servers where 150000 people are online at any given time. Of those they know... probably 10, but they just play with whoever is in the game right now. You might argue this is data-center stuff, but a factor-of-150000 reduction in bandwidth is interesting to data centers, right?

Agreed that fountain codes solve a different problem. But I don't see how sending UDP multicast would be less resilient than sending 150000 UDP unicast streams?

Well, I think carrot and stick would help. Carrot: you get to offer vastly more services at vastly reduced traffic volumes. Stick: legislation requiring cooperation with respect to multicast.

If I only send one multicast stream, losses close to the root of the multicast tree will be much more devastating than if an occasional packet of an individual unicast stream is lost, no?

It's an interesting tradeoff I suppose. In some sense we are talking about enabling a whole host of things that just can't be done now. So in that sense we have 100% packet loss on those applications at the moment. Moving to a 1% packet loss would be a dramatic improvement :wink:

Yeah, in a sense for sure, but "playability" might suffer a bit... I would guess that big sports-ball kinds of distribution events would require somehow getting bandwidth reservations from at least the lower branches of the tree, otherwise it would be a bit risky.... Anyway, I see myself neither distributing nation-wide sports events or news, nor running multi-dozen-player real-time games, so I can sit relaxed on the sidelines and watch how this grows, hoping for maybe a little less congestion at peering/transit points between ASes (something positive even for boring old unicast users :slight_smile: ).

This is one of the gaps we try to help address with CBACC. The idea is to give the ISP a way to know how much bandwidth each stream will take, so they can make an informed decision about whether to ingest and forward it and what to expect for its provisioning, and hopefully, coupled with the stream's popularity within the ISP, make the right call. It's worth noting that this helps not only the stream being transported but also the competing traffic, because it reduces the network load. I think it's analogous to running public transport: a win for everyone through the traffic reduction it gives.

I do think there's some potential for network neutrality questions, but as long as it's not a content-based or provider-based decision and it operates transparently, I don't see how it would fall afoul of net neutrality regulations. That said, this is one point that worries me a bit in case there's something I haven't caught--sloppy regulations can have unintended consequences, and it's possible there's something here I don't know about. As I understand it, though, the European ISPs I spoke with did consider this as part of their due diligence and seemed to expect that the way we're proposing it (as a provider-agnostic, standards-based ingest+forwarding decision aimed at improving overall network performance) would be OK, though this is all second-hand to me and I'm not sure I know all the considerations.

I should maybe note that the Init7 employee giving that presentation was reporting that he had observed the high link-sharing setup in another (unnamed) provider's network at a shared colo of some sort; Init7 was NOT doing it themselves. In his presentation he said they didn't like to do business that way, and he seemed to find the ratio surprisingly and unreasonably high.

Of course your 100mbps internet subscription will never mean you get 100mbps for everything you try to fetch; sharing the internet path's forwarding resources among different users is expected behavior everywhere on the internet. But it's not so easy to pin down who is responsible when downloads or streaming are slow, and it does seem fair to say your ISP, having taken your money, should have some kind of obligations here. There's a big difference between a company that sells someone a 100mbps subscription and shares a 1gbps uplink among 20 people vs. one that does exactly the same thing but shares it among 1000 people. If they are selling these things at the same price, something is probably not right. Is the second abusing its customers? Is the first just running their business badly?
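Just to spell out the contention arithmetic in that example (the plan size, uplink size, and subscriber counts are purely the hypothetical numbers above):

```python
# Toy contention arithmetic for the two hypothetical ISPs above.
plan_mbps, uplink_mbps = 100, 1000
for subscribers in (20, 1000):
    contention = subscribers * plan_mbps / uplink_mbps   # sold capacity vs. real capacity
    worst_case = uplink_mbps / subscribers               # per-user share if everyone maxes out at once
    print(f"{subscribers:4d} subscribers: {contention:5.1f}:1 contention, "
          f"{worst_case:6.1f} Mbps each at full load")
```

That prints 2:1 and 50 Mbps for the first ISP versus 100:1 and 1 Mbps for the second, which is the gap the question is really about.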

I guess it sounds like a noble effort to provide some regulatory relief against abuses here, but I'll start out skeptical that it's going to work well, or that it will capture the over-sharing pressures effectively. The enforcement mechanism sounds kind of game-able, on both sides.

And on top of that, it will depend in an odd way on how much and how consistently your neighbors are using their internet. It would be strange if the answer to the question "is this a fair provisioning of the contracted service" changed because some app became popular and shifted the usage pattern at scale (imagine, for instance, 60% of users starting to run something that does persistent off-peak downloading or p2p sharing of subscribed content, resulting in more overall usage). But something like that would affect the measurements you'd get. It sounds like a tricky problem, which is why I'd imagine it will have some trouble working well in practice. What sounds better to me would be mandated transparency about your uplink sharing factors, so that as a consumer you can make an informed decision between the available options if you care.

Interesting digression and interesting to hear about Germany's efforts here, thanks for pointing it out, I hadn't heard about it yet. I'll be interested to see how it goes.

Not really, but there are tradeoffs that make it a little complicated as to whether it's a good move, and it might be sometimes. It's pretty application-dependent, and different use cases with different protocols (and different operating profiles of the same protocols) also have different loss tolerance and different reactions to loss, so it's a little hard to generalize, but there are some considerations that can be articulated:

Unlike unicast, with most protocols the sender would not typically back off in response to losses, with some exceptions: NORM and RTP have feedback channels that might do so, but those are tunable from the sender side and need to avoid overreacting to individual receivers with bad connections. In most other protocols the receiver is supposed to react to bad enough losses by unsubscribing and using something else, usually a lower-rate channel or an alternate unicast path (or, in theory, the gloriously broken approach described in RFC 3738). But a broken (or malicious) app might not do so, so you can't really rely on all subscribers being well-behaved either, and the network should therefore take steps to prevent persistently oversubscribed links. So you might put a bandwidth cap on the amount of multicast that's lower than the link capacity, for instance.

But also, yes, for apps that repair loss in the multicast channel by using unicast, loss will have a disproportionate impact that might be worth protecting against with reserved bandwidth (or even QoS prioritization of some sort) to improve overall network performance, especially for the most popular streams. (The QoS observation actually made ISPs more worried about net neutrality in some cases--QoS prioritization seems like a touchy subject, with good cause I think. But although I do think it's probably a good approach for some cases, I also think it's optional; there are just consequences.)

Anyway, I don't think it's necessarily required to reserve bandwidth for multicast, but it might be a good idea for some cases, especially for sufficiently popular traffic.

I don't feel like wading through these specs today, or even this month, but I hope someone does.

It's pretty close with Raptor. IIRC you can make blocks of up to about 8k symbols, and there aren't really any limits I know of on the symbol size, except that it has to be a multiple of 64 bytes and (if you're building multicast this way) fit inside a UDP packet. So if you pick a symbol size of 1280 you can encode up to a 10MB block, and the limit on the repair symbols you can generate is something like 56k symbols, IIRC. If you make your symbols (and block size) smaller, you can put more symbols into a single packet, which can reduce how much extra you need.

If you're missing some source data, the number of repair symbols you need to rebuild your block is probabilistic. I think it starts with needing at least 2 extra symbols: if your source was 7000 symbols, you need at least 7002 total to attempt a decode with something like a 98% chance of success, and it passes 99.9% at, I think, 5 extra symbols. As the sender you can decide how much redundancy you want to provide, so if you're anticipating network loss of up to 1%, you can run with 2% repair redundancy and have plenty of margin. You could probably get away with 1.01% redundancy or so to cover a steady 1% loss, but how much you provide just depends on how tight you want to run it.
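To make the arithmetic concrete, here's a tiny sketch using the numbers above (the symbol size, block size, and the 1%/2% figures are just the ones from this post, not anything mandated by the Raptor specs):

```python
# Rough sizing sketch: a ~10 MB block, 1280-byte symbols, 2% repair overhead, 1% network loss.
block_bytes  = 10 * 1024 * 1024
symbol_bytes = 1280
loss_rate    = 0.01     # anticipated steady network loss
overhead     = 0.02     # redundancy the sender chooses to add

k      = block_bytes // symbol_bytes              # 8192 source symbols
repair = int(k * overhead)                        # ~163 repair symbols
sent   = k + repair
expected_received = int(sent * (1 - loss_rate))   # ~8271 symbols survive a 1% loss

print(k, repair, expected_received)
print("margin over k+2:", expected_received - (k + 2))   # comfortably positive
```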

From the Wikipedia article: "For example, with the latest generation of Raptor codes, the RaptorQ codes, the chance of decoding failure when k encoding symbols have been received is less than 1%, and the chance of decoding failure when k+2 encoding symbols have been received is less than one in a million."

So basically you always decode it with k+2, and you really, really always decode it with k+3.
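If you want to put a formula on it, the rule of thumb usually quoted in the RaptorQ literature (a model of the decoder, not a guarantee from the spec) is that decoding fails after receiving $k + o$ symbols with probability roughly

$$P_f(o) \approx 10^{-2(o+1)},$$

i.e. about $10^{-2}$ at $k$, $10^{-4}$ at $k+1$, and $10^{-6}$ at $k+2$, which is where the "one in a million" figure comes from.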

Any more Blue Sky?

A network browser GUI. Identify machines and the services they are offering. Click on machines and set QoS preferences for various purposes.

A GUI queue hierarchy constructor. Let people set up HFSC with 4 classes, and qfq below it, and fq_codel below that, etc., and set up nftables rules and tc filters to classify things with GUI point-and-click (there's a sketch of what this might emit after these items).

A high-end wizard that sets up recommended network segmentations: main LAN, business LAN, DMZ, IoT, kids' subnet, etc.
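To make the queue-hierarchy item concrete, here's a hedged sketch of the kind of commands such a constructor might emit behind the scenes. It's simplified to HFSC with four classes and fq_codel leaves (no qfq middle layer and no nftables/tc classification rules), and the interface name, class names, and rates are all made up:

```python
# Emit a simple HFSC + fq_codel hierarchy for a hypothetical GUI backend to apply.
DEV, TOTAL = "eth0", "100mbit"                        # assumptions, not real defaults
classes = {"voice": "20mbit", "interactive": "30mbit",
           "bulk": "40mbit", "background": "10mbit"}  # made-up class plan

lines = [
    f"tc qdisc add dev {DEV} root handle 1: hfsc default {10 + len(classes) - 1}",
    f"tc class add dev {DEV} parent 1: classid 1:1 hfsc ls m2 {TOTAL} ul m2 {TOTAL}",
]
for n, (name, rate) in enumerate(classes.items(), start=10):
    lines.append(f"tc class add dev {DEV} parent 1:1 classid 1:{n} "
                 f"hfsc ls m2 {rate} ul m2 {TOTAL}  # {name}")
    lines.append(f"tc qdisc add dev {DEV} parent 1:{n} handle {n}: fq_codel")

print("\n".join(lines))   # the GUI would run these (plus the matching filters) instead of printing
```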

https://reproducible-builds.org/reports/2021-11/ I like diffoscope. I am having great difficulty trusting anything anymore.

@dlakelan - can I get you to think outside your box and about what your grandmother, or your local coffee shop, or a small business, would like?

A coffee shop or small business wants to boot up the thing and enter http://myrouter.lan, which is written on the box. Then it has 4 buttons to push: one that says "I'm a coffee shop offering public access", one that says "I'm a small business with no public access", one that says "I'm a small business with remote workers", and one that says "I'm a network professional, take me to the full version".

After clicking the appropriate button they are asked a few minimal questions like "what's the name of your business?" and "what do you want to call your public wifi?" Then a screen should appear that shows them randomly generated passwords and network config information that they can snapshot with their cell phone... and then they never need to look at it again and can put the router on a shelf underneath the espresso machine, near the bags of coffee beans.

(Not that this is really possible, just that it's what they want)

Consider that these shops often employ people who don't know the difference between an IP address, a domain name, a URL, an ISP, or WiFi vs. "the internet". It's extremely challenging to help these people in any way other than just setting things up and making opinionated decisions on their behalf. I'm not in any way belittling these people; they're just not knowledgeable here, in the same way that I don't know a thing about, say, getting good coloration while tie-dyeing t-shirts or weaving native-style reed baskets.

There's a standard for a QR code for phones and tablets to acquire WiFi connection details. If this could be popped up on a screen (phone, PC), with some means to print it, would that help the small business owner?
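For what it's worth, the payload behind those QR codes is just a short text string (the "WIFI:" scheme that Android and iOS cameras recognize). Here's a minimal sketch of generating one, assuming the third-party Python qrcode package (not something shipped with OpenWrt):

```python
import qrcode  # third-party: pip install "qrcode[pil]"

def wifi_qr_payload(ssid: str, password: str, auth: str = "WPA") -> str:
    """Build the standard 'WIFI:' payload that phone cameras understand."""
    def esc(s: str) -> str:
        # Backslash-escape the characters the format treats as special.
        for ch in ('\\', ';', ',', ':', '"'):
            s = s.replace(ch, '\\' + ch)
        return s
    return f"WIFI:T:{auth};S:{esc(ssid)};P:{esc(password)};;"

payload = wifi_qr_payload("CoffeeShop-Guest", "correct horse battery staple")
qrcode.make(payload).save("guest-wifi.png")   # print this and tape it to the counter
```

The router GUI could show the same image on the config screen mentioned above, with a "print this" button next to it.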

I am happy OpenWrt has support for 240/4 and 0/8 now. I am not sure if the GUI lets you use 0/8 at this time, and certainly figuring out the uses for these new ranges is lagging in the IETF.

Also:

A month or two back, I took enormous flak for also helping propose that we reduce 127/8 to 127/16, and not enough folk took a look at what I regard as a more valid proposal, which is making the "zeroth" or "lowest" address generally usable against various CIDR netmasks and finally retiring BSD 4.2 backward compatibility. I think this latter feature is rather desirable, especially for those with a /30, in that it gives you 3 (rather than 2) usable IPv4 addresses.
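As a concrete illustration of the /30 case (just Python's ipaddress module, nothing from the drafts themselves; 192.0.2.0/30 is an arbitrary TEST-NET-1 example):

```python
import ipaddress

net = ipaddress.ip_network("192.0.2.0/30")
print(net.num_addresses)                            # 4 addresses in the block
print([str(h) for h in net.hosts()])                # ['192.0.2.1', '192.0.2.2'] -- usable today
print(net.network_address, net.broadcast_address)   # .0 (the "zeroth") and .3 (broadcast)
# Making the zeroth address assignable would give 3 usable hosts per /30
# (.0, .1, .2), with only the broadcast address still set aside.
```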

In none of these cases did we actually propose use cases for them, merely proposing they move from "reserved" to "unicast" status. I would hope the use case for "lowest" would be obvious (failover redundancy, monitoring tools).

Now, donning my fire-retardant suit, there's the possibility of opening up 127 for "other stuff". https://datatracker.ietf.org/doc/draft-schoen-intarea-unicast-127/ is the first IETF draft in this area, and again, all it suggests is that we reallocate these for unicast outside of localhost.

To try and forestall the flak somewhat: I had several use cases for using stuff above 127/16 for something... but please read the draft? One was use by VMs and containers on a single machine, extending the notion of "localhost" to mean "stuff on the host". IPv4 is still, in general, cheaper than IPv6, offloads do not help for local services on the router, and the everlasting maze of RFC 1918-style IPv4 addresses in Kubernetes has to be debugged to be believed (multiple firewalls, multiple layers of NAT).

Longer term, I kind of think the notion of a wider localhost than ::1 for IPv6 also makes sense in this context.

In any case, having a good place to see what breaks if we fiddle with these archaic allocations is something that CeroWrt was good for... and I really do miss the relative simplicity of the firewalling that was in cerowall, a lot.

I really don't understand those efforts, 127.0.0.0/8 in particular (the others to lesser degrees). The time it would take (even if accepted, which I can't imagine) to phase out or update the critical infrastructure to accept these IP ranges will be measured in decades, especially for 127.0.0.0/8, since pretty much every device in existence has that range hardcoded. Within that time it would be more likely that we go IPv6-only (the whole stack, from the internet backbone to every CPE, desktop, phone, and IoT device, will have to be replaced anyway, so there's no reason not to go IPv6 at the same time) than that anyone risks using 127.123.245.0/24 for anything.

--
Totally ignoring the security aspects, the actual usage of these ranges in deployed software, and the misrouted traffic you can expect to keep hitting those addresses for even longer.
Yes, reserving 127.0.0.0/8 for loopback purposes is a waste, but that ship sailed in the early eighties, or the late eighties at the latest.

Seriously, we really, really need a strong economic incentive to push for IPv6 everywhere. For example, an infrastructure bill that requires all ISPs to pay back whatever they got from the government if they don't offer a native /56 to everyone and a /48 to anyone who requests one, including all business contracts, by Jan 1 2023. And a $200/mo/address tax on the use of globally routable IPv4 addresses by businesses after Jan 1 2023... In the US we have ~50% of consumer traffic going over IPv6 already; just tip it the rest of the way with carrot and stick.

Seriously, it's easier and cheaper to run IPv6-only networks than it is to keep IPv4 networks going, except when there's some ancient piece of software that no one has updated in years but that runs your billing department or whatever.

So much laziness in the networking world around not learning anything about IPv6.

It would be useful to have an OpenWrt build that has no IPv4 configured on LANs by default and runs Tayga and DNS64 on the router, so people could plug it in and get a testing network without much work.

Yeah, people should pick their battles. Even the massive 240/4 (aka "class E") would probably be easier to implement, and would only have bought 18 months of growth back when it was first proposed. And as it turns out, delay benefits no one: it just makes it easier for people to believe they have that much longer not to do anything.

(Aside: in my role as lead SW developer for a Fortune 100 tech giant I will leave nameless, I fought this battle many times: the insistence of QC and dev analysts on putting in a requirement that the application would not permit the use of "reserved" IP addresses. I spent entire meetings convincing them that it is not our job to police network allocations and that it was the customer's job to know their IP addresses, not ours to tell them. I generally won these arguments, but it was ridiculously hard work, wasted lots of time, and made it very clear to me why thousands of other companies did the wrong thing.)
