Or many-to-many communication, but my argument is about what use-case was behind its standardization.
Did they? The TV people had/have a perfectly good system that works for them, "broadcast" over shared radio frequencies; my question is what drove the development of multicast in the IETF.
This is how FEC works, really: the amount of redundancy determines how much loss you can tolerate, but that redundancy comes at the cost of capacity. Depending on the expected error distribution, this may or may not be better than simple ACK/NACK schemes in which lost data is re-requested and re-transmitted; those only work over bidirectional links, and digital TV (over satellite or terrestrial antennas) is a great example where ACK/NACK will not work well.
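To make the redundancy/capacity trade-off concrete, here is a minimal sketch of the simplest possible packet-level FEC: one XOR parity packet per group of k data packets, which lets the receiver repair any single loss in the group without re-requesting anything. This is a hypothetical toy (real broadcast systems use stronger codes like Reed-Solomon or fountain codes), but it shows the principle: 1/k extra capacity buys tolerance of one lost packet per group.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """Build the redundancy packet: XOR of all data packets in the group."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return parity

def recover(received, parity):
    """Repair the one missing packet: XOR the parity with everything received.

    received: dict of index -> packet, with exactly one index missing.
    """
    repaired = parity
    for p in received.values():
        repaired = xor_bytes(repaired, p)
    return repaired

# Sender transmits 4 data packets plus 1 parity packet (25% overhead).
data = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(data)

# Receiver loses packet 2, but repairs it locally -- no NACK, no back channel.
received = {0: data[0], 1: data[1], 3: data[3]}
assert recover(received, parity) == b"pkt2"
```

Two losses in the same group are unrecoverable with a single parity packet, which is exactly why the expected error distribution matters when choosing how much redundancy to pay for.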
That might be true, but it is mostly solved by CDNs and P2P. Networks are typically built out of static links, and the low-load hours can be used to distribute expected popular content out to the edge nodes, so the backbone links are not loaded once users request that data during the high-load hours. And for my DSL link it matters not one iota whether X Mbps are tied up by data coming in via uni- or multicast... So, I do think the internet found ways around the waste argument you are making here.
You make sure you do local P2P distribution of anything important...
It has been the solution of the past, as people operating big-iron routers seem pretty allergic to stuff like competent AQM (which makes operating a link close to saturation actually bearable). I do not see that attitude changing, and I also do not know how backbone routers arbitrate between uni- and multicast flows when they become congested.
And no video on demand... really, that sounds much better in theory than it would feel in reality. I actually subscribe to your goals here, but I do not believe that multicast brings us closer to them than unicast... Also, what you describe is the one use-case where I already agreed multicast is useful: linear real-time TV (it can be time-shifted; the point is you cannot arbitrarily move the playhead around but are forced to follow the program in real time, though short breaks might be possible via local buffering).
But you are making political points here (with which I agree) to justify a narrow technical solution (and I am not convinced that multicast has a big future; it is still at the whims of big centralized forces, the different ISPs/ASes that actually need to route the packets). Again, for real-time linear programming it might be a decent solution, but I see few such problems in real life (maybe because I am living a sheltered life in my pampered-westerner bubble).
+1; the same is true for the internet access providers (and for most companies: once they get too large they tend to become more of a problem than a solution). I am fine with trying to find ways to rebalance society again, and if multicast can help with that, I am all for it (in spite of not really seeing the "light" yet). As always, thank you for the nice discussion and for tolerating my sometimes accidental impoliteness (not being a native speaker/writer, I am probably not as polite as I intend).