This is one of the gaps we try to help address with CBACC. The idea is to give the ISP a way to know how much bandwidth each stream will take, so it can make an informed decision about whether to ingest and forward the stream and how to provision for it, ideally weighing that cost against the stream's popularity within its network. It's worth noting that this helps not only the stream being transported but also the competing traffic, because it reduces overall network load. I think it's analogous to running public transport: a win for everyone because of the traffic reduction it gives.
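To make that concrete, here's a toy sketch of the kind of decision that per-stream bandwidth metadata enables. The greedy subscribers-per-bit heuristic and all the names here are my own illustration, not anything the CBACC spec defines:

    from dataclasses import dataclass

    @dataclass
    class Stream:
        name: str
        advertised_bps: int  # from the stream's bandwidth metadata
        subscribers: int     # popularity within this ISP's network

    def plan_ingest(streams: list[Stream], budget_bps: int) -> list[Stream]:
        """Greedily ingest the streams that serve the most subscribers
        per bit of multicast capacity, up to a provisioning budget."""
        ingested, used = [], 0
        for s in sorted(streams,
                        key=lambda s: s.subscribers / s.advertised_bps,
                        reverse=True):
            if used + s.advertised_bps <= budget_bps:
                ingested.append(s)
                used += s.advertised_bps
        return ingested

The point isn't this particular heuristic; it's that without the advertised bitrate there's no principled way to make the call at all.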
I do think there's some potential for network neutrality questions, but as long as the decision isn't content-based or provider-based and it operates transparently, I don't see how it would fall afoul of net neutrality regulations. That said, I agree this is one point that worries me a bit in case there's something I haven't caught: sloppy regulations can have unintended consequences, and it's possible there's something here I don't know about. As I understand it, though, the European ISPs I spoke with did consider this as part of their due diligence, and they seemed to expect the approach as we're proposing it (a provider-agnostic, standards-based ingest-and-forwarding decision aimed at improving overall network performance) to be OK. This is all second-hand to me, though, and I'm not sure I know all the considerations.
I should note that the Init7 employee giving that presentation was reporting that he had observed the high link-sharing setup in another (unnamed) provider's network at a shared colo of some sort; it was NOT Init7 doing it themselves. In his presentation he said they didn't like to do business that way, and he seemed to consider the ratio surprisingly and unreasonably high.
Of course your 100 Mbps internet subscription will never mean you get 100 Mbps for everything you try to fetch; sharing a path's forwarding resources among different users is expected behavior everywhere on the internet. But it's not easy to pin down who is responsible when downloads or streaming are slow, and it does seem fair to say your ISP should have some kind of obligations here, given that it's taking your money. There's a big difference between a company that sells 100 Mbps subscriptions and shares a 1 Gbps uplink among 20 people and one that does exactly the same thing but shares it among 1,000. If they're selling these at the same price, something is probably not right. Is the second abusing its customers? Is the first just running its business badly?
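To put numbers on that, a back-of-envelope calculation (the 20-vs-1,000 split comes from the example above; the rest is just arithmetic):

    UPLINK_BPS = 1_000_000_000  # shared 1 Gbps uplink
    PLAN_BPS = 100_000_000      # each customer sold a 100 Mbps plan

    def contention_ratio(subscribers: int) -> float:
        """Total sold capacity divided by actual uplink capacity."""
        return subscribers * PLAN_BPS / UPLINK_BPS

    for n in (20, 1000):
        print(f"{n:5d} subscribers: {contention_ratio(n):6.1f}:1 oversubscription, "
              f"{UPLINK_BPS / n / 1e6:.1f} Mbps each if everyone is active at once")

That's 2:1 (50 Mbps each at full load) for the first provider and 100:1 (1 Mbps each) for the second. Some oversubscription is normal, since users rarely peak simultaneously; two orders of magnitude of difference at the same price is the part that smells wrong.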
It sounds like a noble effort to provide some regulatory relief against abuses here, but I'll start out skeptical that it will work well, or that it will capture the over-sharing pressures effectively. The enforcement mechanism sounds gameable, on both sides.
And on top of that, it will depend in an odd way on how much, and how consistently, your neighbors are using their internet. It's strange if the answer to the question "is this a fair provisioning of the contracted service" changes because some app becomes popular and shifts the usage pattern at scale (imagine, for instance, 60% of users start running something that does persistent off-peak downloading or p2p sharing of subscribed content, raising overall usage). But something like that would affect the measurements you'd get. It sounds like a tricky problem, which is why I'd expect it to have trouble working well in practice. What sounds better to me would be mandated transparency about uplink sharing factors, so that as a consumer you can make an informed decision between the available options if you care.
An interesting digression; thanks for pointing out Germany's efforts here, I hadn't heard about them yet. I'll be curious to see how it goes.
Not really, but there are tradeoffs that make it complicated to say whether it's a good move, and sometimes it might be. It's pretty application-dependent, and different use cases with different protocols (and different operating profiles of the same protocols) have different loss tolerances and different reactions to loss, so it's hard to generalize. But there are some considerations that can be articulated:
Unlike unicast, with most protocols the sender would not typically back off in response to losses. There are some exceptions: NORM and RTP have feedback channels that might do so, but these are also tunable from the sender side and need to avoid overreacting to individual receivers with bad connections. In most other protocols the receiver is supposed to react to bad enough losses by unsubscribing and using something else, usually a lower-rate channel or an alternate unicast path (or, in theory, the gloriously broken approach described in RFC 3738). But a broken (or malicious) app might not do so, so you can't really rely on all subscribers being well-behaved either, and the network should therefore take steps to prevent persistently oversubscribed links. You might, for instance, put a bandwidth cap on the amount of multicast that's lower than the link capacity, along the lines of the sketch below.
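A minimal sketch of that kind of cap, assuming each stream advertises its maximum bitrate (roughly the metadata CBACC is meant to carry); the names and the 50% figure are made up for illustration:

    LINK_CAPACITY_BPS = 10_000_000_000          # e.g. a 10 Gbps link
    MULTICAST_CAP_BPS = LINK_CAPACITY_BPS // 2  # keep headroom for unicast

    subscribed_bps: dict[tuple[str, str], int] = {}  # (source, group) -> bitrate

    def admit(source: str, group: str, advertised_bps: int) -> bool:
        """Admit a new (S,G) subscription only if the total advertised
        multicast load stays under the cap; otherwise refuse, leaving
        the receiver to fall back to unicast or a lower-rate channel."""
        if sum(subscribed_bps.values()) + advertised_bps > MULTICAST_CAP_BPS:
            return False
        subscribed_bps[(source, group)] = advertised_bps
        return True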
But also, yes: for apps that repair loss in the multicast channel by using unicast, loss results in a disproportionate impact that might be worth protecting against with reserved bandwidth (or even QoS prioritization of some sort) to improve overall network performance, especially for the most popular streams. (The QoS observation actually made ISPs more worried about net neutrality in some cases; QoS prioritization seems like a touchy subject, with good cause, I think. But although I do think it's probably a good approach for some cases, I also think it's optional; there are just consequences.)
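To see why the repair impact is disproportionate, a quick illustration (all figures here are made-up assumptions):

    STREAM_BPS = 5_000_000  # one 5 Mbps multicast stream
    SUBSCRIBERS = 1_000
    LOSS_RATE = 0.01        # 1% loss, seen independently per receiver

    # Each receiver repairs its own losses over unicast, so repair
    # traffic scales with the subscriber count:
    repair_bps = SUBSCRIBERS * LOSS_RATE * STREAM_BPS
    print(f"stream: {STREAM_BPS / 1e6:.0f} Mbps, "
          f"repair: {repair_bps / 1e6:.0f} Mbps "
          f"({repair_bps / STREAM_BPS:.0f}x the stream itself)")

Just 1% loss across 1,000 subscribers turns a 5 Mbps stream into 50 Mbps of unicast repair traffic, 10x the stream itself, which is why protecting the popular streams can pay for itself.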
Anyway, I don't think it's necessarily required to reserve bandwidth for multicast, but it might be a good idea in some cases, especially for sufficiently popular traffic.