CeroWrt II - would anyone care?

A stream only arrives on the network if it's desired, so this is no different from unicast, and it only arrives once, so as soon as at least 2 people want it it's gravy. With Windows updates it's HUGE gravy. Imagine there are 10B Windows installs. A single Raspberry Pi could provide the stream for the entire planet. Obviously you'd probably want at least 5-10, but the alternative is 10B separate streams. Yes, Windows has some peer-to-peer thing, but it's still just garbage compared to a proper multicast solution. Similarly for TP-Link switches or routers, or Cisco zero-day patches, or whatever.
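Just to put rough numbers on it (the install count and update size here are made-up illustration figures, not real data):

```python
# Back-of-the-envelope: unicast vs. multicast delivery of one update.
# All numbers are assumptions for illustration only.
installs = 10e9               # assumed client count from the post above
update_bytes = 0.5 * 2**30    # assume a 0.5 GiB update

unicast_total = installs * update_bytes   # every client pulls its own copy
multicast_total = update_bytes            # best case: one copy per link of the tree

print(f"unicast aggregate : ~{unicast_total / 1e18:.1f} EB")
print(f"multicast per link: ~{multicast_total / 1e9:.1f} GB (best case)")
```

Even if the real numbers are off by an order of magnitude, the gap stays enormous.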

IPv4 had trouble with too little address space, but an IPv6 multicast address gives us 112 bits of group identifier; every person on earth could be multicasting a million things without major issues.

yes, but these two need to:
a) request/be interested in it at the same time
b) both desire the same bitrate
c) be error-free, for video/audio an occasional glitch might be acceptable, for software distribution not so much (there are designs for retransmissions over multicast, but IMHO these increase complexity considerably, reality often does that to conceptually nice and clean designs)

I agree that in theory it looks like a nice method, but in practice it seems far less so. IMHO most of what it promises to solve has been superseded by most content being served from relatively close by CDNs or by bit-torrent swarms, both of which do not suffer from the three points above. Yes, they might waste more bandwidth in aggregate, but that is not really that scarce...

In addition it has been tried, e.g. for IP-TV delivery, which is arguably the best use case: fixed schedule, fixed bit-rate signal with traditionally thousands of viewers. But the local incumbent switched away from doing this and went back to boring unicast delivery (which might be driven by on-demand being more important than the real-time streams, but might also have been caused by a simple change of vendor of their IP-TV platform).

Yeah, I do not believe these are a good fit for multicast... possible, but why would I use e.g. a 1Mbps multicast, when I can get the same data at say link speed of 100Mbps from the nearest CDN?
IMHO multicast over the internet is a solution in search of a problem :wink:

Not everything that is technically possible is necessarily a good idea... :wink: and a 64-bit interface ID really is wasting bits (bits that can be reclaimed by careful construction of interface IDs). I am not saying going for 128 bits in IPv6 was a bad idea, but I am not sure we did more than scratch the surface of what these additional 96 bits can be made to do.

So here is my IPv6 CeroWrtII proposal:
Make the top or bottom 6 interface-ID bits do double duty as end-to-end classifier bits that e.g. can be used to transmit the intended DSCP to the end-point in a way intermediate hops will not dare to fiddle with (since rewriting them changes the destination address, the packet might get routed to the wrong end-point).
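A toy sketch of what I mean (the prefix and DSCP value are just examples; nothing here touches an actual socket or routing table):

```python
# Toy illustration of the proposal: carry the intended 6-bit DSCP in the
# bottom 6 bits of the IPv6 interface ID and read it back at the receiver.
import ipaddress

def encode_dscp(addr: str, dscp: int) -> ipaddress.IPv6Address:
    assert 0 <= dscp < 64
    a = int(ipaddress.IPv6Address(addr))
    return ipaddress.IPv6Address((a & ~0x3F) | dscp)   # overwrite the bottom 6 bits

def decode_dscp(addr: ipaddress.IPv6Address) -> int:
    return int(addr) & 0x3F

dst = encode_dscp("2001:db8::1:0", 46)   # 46 = EF, for example
print(dst, decode_dscp(dst))             # 2001:db8::1:2e 46
```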

Nah, for software distribution you use fountain coding. A client just starts listening and after X amount of time is done, regardless of where they started in the cycle. It's fine to send many of these as separate 1, 10, 100 Mbps streams; just subscribe to your available bandwidth tier.
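For scale, roughly how long staying subscribed takes per tier (assuming an illustrative 500 MB update and ignoring coding overhead):

```python
# Carousel completion time per tier; with a fountain code it doesn't matter
# when you join, only how long you stay subscribed. 500 MB is an assumption.
update_bits = 500e6 * 8
for mbps in (1, 10, 100):
    print(f"{mbps:>3} Mbps tier: ~{update_bits / (mbps * 1e6) / 60:.0f} min")
```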

Even if this were just a thing available as caches on-site at universities and large businesses and inside data centers or at ISP local offices, it could be extremely useful. Think of how many bytes of "Windows update" get sent around the internet.

FEC will only recover errors up to a given magnitude, while eating up capacity...

Yes, at that point you are adding a lot of complexity only to make multicast not suck raw eggs when faced with reality... a solution looking for a problem. You would ideally need different casts with different rates, different levels of FEC, different start times...

I could not care less; larger businesses/universities already set up their own update servers (so not every update of a client hits the internet) and home users can use P2P mode... and these update servers could be seeded at night time when there is little traffic anyway.

I am still not seeing how multicast substantially improves any of this, and I can't shake the feeling it might have been designed to allow linear-TV-style services, which have fallen out of favor with many users since the time multicast was designed, no?

Multicast is inherently about one sender and N receivers. It has nothing inherently to do with TV or real time. Software updates are a perfect example: there's nothing inherently linear or time-sensitive about them, it's just that the TV people were the first to realize they had this issue.

The RaptorQ codes let you broadcast an essentially infinite stream, and whoever receives any N+2 blocks reconstructs the N-block file. You can stop and restart and have any blocks missing or whatever, but as soon as you've received N+2 different blocks you're done.
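This is not RaptorQ itself, but here's a toy random-linear fountain code (all parameters invented for the example) that demonstrates the property: keep collecting symbols from anywhere in the stream and you can decode once you have roughly N of them plus a couple extra.

```python
# Toy rateless ("fountain") code over GF(2). NOT RaptorQ (which is far more
# efficient to decode), just a random linear fountain that shows the
# "any N + a couple of extra symbols" property. Block count/size are made up.
import itertools, os, random

N, BLK = 32, 64                                 # 32 source blocks of 64 bytes
blocks = [os.urandom(BLK) for _ in range(N)]

def encode(rng):
    """One encoded symbol: (random coefficient bitmask, XOR of selected blocks)."""
    mask = rng.getrandbits(N) or 1
    data = bytearray(BLK)
    for i in range(N):
        if mask >> i & 1:
            for j in range(BLK):
                data[j] ^= blocks[i][j]
    return mask, bytes(data)

def decode(symbols):
    """Gaussian elimination over GF(2); returns the N blocks, or None if rank < N."""
    pivots = {}                                 # highest set bit -> (mask, data)
    for mask, data in symbols:
        data = bytearray(data)
        while mask:
            p = mask.bit_length() - 1
            if p not in pivots:
                pivots[p] = (mask, data)
                break
            pmask, pdata = pivots[p]
            mask ^= pmask
            for j in range(BLK):
                data[j] ^= pdata[j]
    if len(pivots) < N:
        return None
    out = [None] * N                            # back-substitute, lowest pivot first
    for p in sorted(pivots):
        mask, data = pivots[p]
        for q in range(p):
            if mask >> q & 1:
                for j in range(BLK):
                    data[j] ^= out[q][j]
        out[p] = data
    return [bytes(b) for b in out]

rng = random.Random(1)
carousel = (encode(rng) for _ in itertools.count())   # endless encoded stream
received, recovered = [], None
while recovered is None:
    received.append(next(carousel))
    if len(received) >= N:
        recovered = decode(received)
print(f"recovered from {len(received)} symbols (N = {N}):", recovered == blocks)
```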

Now, you maybe don't care, but bandwidth isn't infinite and investing money where it's not needed is wasteful. Perhaps we in the first world can afford that waste, but how about West African or Fijian school children? What if your whole village of 5000 people relies on a single 100 Mbps microwave link that cost as much as your whole village's annual income to set up?

I just think we miss out on a lot of possibilities when we decide that over-provisioning and bandwidth waste are the solution for the future. Furthermore, I think interactive video and audio will be a big part of the future, and a solution where you don't need a centralized gatekeeper makes a lot of sense. If I want to run, say, JuliaCon with 25000 people globally watching a presentation on the latest compiler tricks, right now I have to do that through YouTube, because they are the only people with anything like the available infrastructure. But if our network is designed to serve the needs of everyone instead of just the moneyed elite, then when I join a multicast stream of the conference I just start receiving it, no big contract with a "video distribution provider" required. No massive 200M investment in Zoom, the potentially compromised Chinese spy organization... etc.

I think it just takes a little more imagination to see what is possible and who benefits. Multicast across the internet is a democratizing force. Anyone with a cell phone can broadcast atrocities in Afghanistan without needing YouTube or whatever. Centralized big content providers are not a healthy way to grow the internet.

Or many-to-many communication, but my argument is about which use-case was behind its standardization.

Did they? The TV people had/have a perfectly good system that works for them: "broadcast" over shared radio frequencies. My question is what drove the development of multicast in the IETF.

This is how FEC works, really, and with the amount of redundancy you can control how much loss you can deal with, but that redundancy comes at the cost of capacity. Depending on the expected error distribution that may or may not be better than simple ACK/NACK schemes in which lost data gets re-requested and re-transmitted, but those only work in bidirectional communication; digital TV (over satellite or terrestrial antennas) is a great example where ACK/NACK will not work well.
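To put rough numbers on that trade-off (assuming an idealized erasure code; real codes need a little extra overhead, and the symbol counts are arbitrary examples):

```python
# Redundancy vs. capacity for an idealized erasure code: k data symbols plus
# r repair symbols survive the loss of any r out of the k + r sent,
# at the cost of r / k extra capacity.
k = 100
for r in (5, 10, 25):
    print(f"k={k}, r={r}: tolerates {r / (k + r):.0%} loss, "
          f"costs {r / k:.0%} extra capacity")
```

And if the actual loss exceeds what the chosen r covers, the block is simply gone, which is exactly the magnitude limit mentioned above.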

That might be true, but that is mostly solved by CDNs and P2P. Networks are typically built out of static links, and the low-load times can be used to distribute expected popular content out to the edge nodes, so the backbone links are not loaded once users request that data during the high-load times. And for my DSL link it matters not one iota whether X Mbps are tied up due to the data coming in via uni- or multicast... So, I do think the internet found ways around the waste argument you are making here.

You make sure you do local P2P distribution of anything important...

It has been the solution of the past, as people operating big-iron routers seem pretty allergic to stuff like competent AQM (which makes operating a link close to saturation actually bearable). I do not see that attitude changing, and I also do not know how backbone routers arbitrate between uni- and multicast flows when they become congested.

And no view-on-demand... really, that sounds much better in theory than it would feel in reality. I actually subscribe to your goals here, but I do not believe that multicast brings us closer to them than unicast... Also, what you describe is the one use-case for which I already agreed multicast is useful: linear real-time TV (it can be time-shifted, the point is you can not arbitrarily move the playhead around but are forced to follow the program in real time; sure, short breaks might be possible via local buffering).

But you are making political points here (with which I agree) to justify a narrow technical solution (and I am not convinced that multicast has a big future; it is still at the whims of big centralized forces, the different ISPs/ASes that actually need to route the packets). Again, for real-time linear programming it might be a decent solution, but I see few such problems in real life (maybe because I am living a sheltered life in my pampered-westerner bubble).

+1; same is true for the internet access providers (or mostly all companies, if they get too large they tend to become more of a problem than a solution). I am fine with trying to find ways to rebalance society again, and if multicast can help that, I am all for it (in spite of not really seeing the "light" yet). As always, thank you for the nice discussion and for tolerating my sometimes accidental impoliteness (not being a native speaker/writer I probably am not as polite as I intend).

The latest and greatest news regarding multicast is "BIER". A ton of documents here: https://datatracker.ietf.org/wg/bier/documents/

Try: https://datatracker.ietf.org/doc/draft-eckert-bier-cgm2-rbs/

OK, this is quite literally a moonshot. The DTN "Bundle" protocol. Also the "Minkowski routing system" looks fascinating.

Really good interview with Vint over here: https://www.datacenterdynamics.com/en/analysis/vint-cerfs-interplanetary-ambitions/

Maybe we'll end up with a use for spacewrt.org after all.

https://projet.liris.cnrs.fr/riot/dtn_implementations_survey.html

There are a lot of chips that have an MCU core too. https://upcn.eu/

While trying to get a better understanding of the various mechanisms for QoS, I came across trTCM (Two Rate Three Color Marker). What I can't find is how that relates to something like Cake. It seems this type of shaping/rate control is available in a lot of enterprise-grade switches/routers. That, combined with 8 (maybe 4) QoS queues, seems like a good solution to keep a downstream device from needing to buffer (which is how I understand bufferbloat: downstream devices needing to buffer to avoid having to drop a packet).
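Here's a rough sketch of my current understanding of trTCM (color-blind mode, per RFC 2698): two token buckets, one refilled at the committed rate (CIR/CBS) and one at the peak rate (PIR/PBS), and packets get colored green/yellow/red depending on which buckets they fit. The rates/bursts below are arbitrary examples:

```python
# Sketch of a two rate three color marker (RFC 2698, color-blind mode).
# Cake, by contrast, doesn't color packets for someone downstream to act on;
# it queues, schedules, and drops/marks within its own shaper.
import time

class TrTCM:
    def __init__(self, cir, cbs, pir, pbs):     # rates in bytes/s, bursts in bytes
        self.cir, self.cbs, self.pir, self.pbs = cir, cbs, pir, pbs
        self.tc, self.tp = cbs, pbs             # both buckets start full
        self.last = time.monotonic()

    def color(self, size):
        now = time.monotonic()
        dt, self.last = now - self.last, now
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        self.tp = min(self.pbs, self.tp + self.pir * dt)
        if self.tp < size:                      # exceeds even the peak rate
            return "red"
        if self.tc < size:                      # within peak, above committed
            self.tp -= size
            return "yellow"
        self.tc -= size                         # within the committed rate
        self.tp -= size
        return "green"

meter = TrTCM(cir=1_000_000, cbs=15_000, pir=2_000_000, pbs=30_000)
print([meter.color(1500) for _ in range(25)])   # a burst: greens, then yellows, then reds
```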

Where can I find more information on that and why Cake is a better (but more CPU intensive) way to do it?

Let's think about it as a kind of Meta Net Neutrality. Think of multicast as not only a way to shout into the ether... but also a way to suck info out of the ether. When your machine says "give me whatever ff0e::1234:5678:9abc:deff has to say", do we allow supposedly neutral "carriers" to say "pshaw, yeah right", or are they required to send out some probes to all the networks they're connected to saying "hey, give me what ff0e::1234:5678:9abc:deff has to say", and to deliver whatever that is to you?

If we organize society in such a way that ISP carriers have to make a good-faith effort to deliver what you ask for, then any citizen with a Raspberry Pi can broadcast info to the entire world. On the other hand, if we let them stick to just unicast, then only people with enough capital equipment to make 7 billion simultaneous connections and shove 10 Mbps * 7 billion (70 million gigabits per second, or the bandwidth of 175,000 400 Gbps switches; so maybe AT&T, Google, Microsoft, Facebook and a few others) down a wire can send high-quality video of human rights violations (or maybe something good, like kids' sporting events or a conference on Julia programming) to the entire planet.

There's a HUGE difference in equality of access when the network is required to be a neutral carrier and anyone with a RPi4 can broadcast to the world.

No worries. I appreciate pushback, it sharpens ideas.

Also note, if there are 7B people on earth then the last 64 bits of IPv6 give about 2.6B addresses to each person on the planet. Or, another way of saying that: if you just decide to start multicast broadcasting on a randomly generated ff0e::.... group, there is a negligible chance of a collision. Still, it would probably be good to be able to request not only that given destination, but from a particular source.
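A quick birthday-bound sanity check on that "negligible" (taking the full 112 bits of group ID that remain after the ff0e prefix, and assuming everyone picks their group uniformly at random):

```python
# Birthday bound: with M independent senders each picking a random 112-bit
# group ID, P(any collision) is roughly M**2 / (2 * 2**112).
M = 8e9                               # every person on the planet picks one
print(f"{M**2 / (2 * 2**112):.1e}")   # ~6e-15, i.e. effectively never
```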

Like the acronym a lot!

Going back to this. How about a hash-based method to distribute download across 2 or 3 different IFBs and upload across several veths stuck into a bridge?

I think the issue is that the traffic shaper really needs to have all traffic across an interface under its control, so if the shaper is running on different CPUs it still needs to coordinate to not exceed the global limit.... but I might be misunderstanding the core of your question.

My idea was for people with under-powered but say 2 or 4 core routers to set up say 2 or 3 interfaces and give each one a limit which is 1/2 or 1/3 of the true global limit. The true global limit is guaranteed not to be exceeded, but the aggregate amount might still be more than what is possible under a single CPU scenario.
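Something like this, conceptually (this is just the hashing idea, not actual tc/IFB configuration; the rate and instance count are examples):

```python
# Hash each flow's 5-tuple to one of N shaper instances, each set to 1/N of
# the contracted rate: the sum can never exceed the contract, but no single
# flow can get more than its instance's share either.
import hashlib

N_SHAPERS = 3
LINE_RATE_MBPS = 900                            # assumed contracted rate
PER_SHAPER_MBPS = LINE_RATE_MBPS / N_SHAPERS

def shaper_for(flow):                           # flow = (src, dst, proto, sport, dport)
    digest = hashlib.sha1(repr(flow).encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_SHAPERS

flows = [("192.0.2.1", "198.51.100.7", "tcp", 40000 + i, 443) for i in range(6)]
for f in flows:
    print(f"sport {f[3]} -> ifb{shaper_for(f)} @ {PER_SHAPER_MBPS:.0f} Mbps")
```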

Yeah, that sucks raw eggs... or, put differently, I would be amazed if users were willing to make such a trade-off (unless each shaper had a rate that allows it to saturate the LAN interface...). So this would require a lot of work at managing the users' expectations such that they are sufficiently happy with the result.

Well, for whatever reason, the whole internet hasn't read my post on gigabit routers, and people still expect their $80 consumer all-in-one routers to SQM their new 500 Mbps+ connections, but they only get 250 or 300 Mbps out of them.

Anyway, I agree that this hack is not the right solution; the right solution is to buy a wired router that handles well over a gigabit, and then move on. But some people are even crazily getting 10 Gbps connections at $40/mo and then wondering why $80 all-in-ones can't handle those speeds...

Slackers! ... these kids today :wink:

+1; or, if 1 Gbps is cheap enough, the willingness to simply ignore a big part of the potential throughput for better latency under load. Like when I shaped my 100/36 link down to 49/35 because that was the limit my router could run cake with... :wink: (in all fairness, my ISP has since considerably improved its bufferbloat game (cyclic ramping download bufferbloat from 30 to 60ms), still cake does mostly better (pretty flat at 30ms)). The coming 10 Gbps links are going to make everything harder... (actually only traffic shaping, I think scheduler and AQM are cheap enough already).

MQPRIO with actual hardware offload is now being tested (see: mvebu mqprio testing).

I realise that in order to keep bufferbloat in check we want "no buffers" at all, and for hardware to do MQPRIO or HTB or... it needs to buffer at least a few packets to re-order. BUT any NIC already has some kind of queue right now, and I haven't seen anywhere (yet) to actually reduce this queue to only 1 or 2 packets vs. the standard 1000.

Thanks @dtaht for pointing this thread out to me.

I've been working on multicast at Akamai (in IETF and W3C) and we're getting some traction, though there's still a way to go.

I gotta say I agree almost completely with @dlakelan about the way we should be looking at it, and will also say this is not as hopeless as people usually think when they first hear about it. I do not think this ship has sailed. This is demonstrably useful, both for live video and for software download, plus perhaps some other cases. The main thing it solves that CDNs and P2P cannot solve is access-network congestion, particularly in cable and GPON networks (and though I agree with @moeller0 that on DSL it won't make as much difference, it can still reduce load on the DSLAM's uplink, which matters more or less often depending on how oversold they are).

For those big-download days it would make a big difference to people, and there's a bunch of ISPs I've spoken to with some interest in making it happen.

For CeroWrt in particular, it would be awesome if the mcproxy opkg were on by default and enabled for the v4 and v6 global SSM addresses (232.0.0.0/8 and ff3e::/96), so it would pass along global joins from the LAN into the WAN. This would give people with their own router the capability wherever the ISP provides multicast access. Note this functionality is enabled and in use by several European ISPs for their devices, where they have TV services with a mobile app. I don't know which devices have it baked in or not, but I checked an out-of-the-box Fritz!Box, and it was there and on by default.

(Also great would be if DHCP would forward in the DOMAIN-SEARCH option if it arrives from wan, as this would enable DNS-SD for ISP-provided services, which would be potentially useful for many things and specifically useful for mnat...)

For a little more color on the status, we've got some specs (with prototypes) at various stages of completeness to fill in the important gaps we've found along the way:
https://www.rfc-editor.org/rfc/rfc8777.html
https://datatracker.ietf.org/doc/html/draft-ietf-mboned-ambi
https://datatracker.ietf.org/doc/html/draft-ietf-mboned-cbacc
https://datatracker.ietf.org/doc/html/draft-ietf-mboned-mnat
https://datatracker.ietf.org/doc/html/draft-krose-multicast-security

I don't want to hijack the thread completely, but I'll encourage anyone interested in global multicast to join the W3C community group (requires a free W3C account and agreeing to the W3C's CLA); we meet on the 1st Wednesday of the month: