CeroWrt II - would anyone care?

Fun thing: my VoIP base station insists on IPv4 only, but even if it did IPv6, my ISP requires me to resolve its SIP servers via its IPv4 DNS servers...

Maybe it is time to switch SIP providers, but that is hard to justify, given that flat-rate telephony comes as part of my internet package (and cannot be cancelled).

I think this is backwards-facing; with 64 bits of host ID, once people wrap their heads around it, no one will want DHCP except possibly for servers, and even then manually assigned tokenized addresses are likely easier. There is just no need for centralized assignment, whose main purpose was always to avoid collisions and conserve addresses.

For devices that provide services and need a well-known IP address you just assign them abcd:ffff::1 and count up from there (there are 4 billion of those addresses, and no one would be stupid enough to put 4 billion devices on one subnet), or use stable privacy addresses and DDNS, which is the clever solution.
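Purely as a hedged sketch of the "count up from ::1" idea (the documentation prefix 2001:db8:0:1::/64 and the host names are placeholders of my own, not anything from this thread):

```python
import ipaddress

prefix = ipaddress.IPv6Network("2001:db8:0:1::/64")  # stand-in for your delegated /64

# Hand-picked, easy-to-remember interface IDs for machines that offer services.
servers = {"router": 1, "nas": 2, "printer": 3}

for name, iid in servers.items():
    # IPv6Address objects support integer arithmetic, so "counting up" is trivial.
    print(f"{name:8s} {prefix.network_address + iid}")
# router   2001:db8:0:1::1
# nas      2001:db8:0:1::2
# printer  2001:db8:0:1::3
```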

DHCPv6 isn't really a thing. It's a completely useless tech. The DNS server information is embedded in the RAs, and Android handles those just fine. In my experience Google made absolutely the right decision to stand firm on no DHCPv6. If they hadn't, the newbs would have assumed DHCP was the one true path and perpetuated a disaster.

No, because you can open up wide for a given gaming device; there is no NAT, so no dynamically mapped ports. Every Xbox can have port xyz opened for it separately. More to the point, you just use symmetric signalling, and then the port opens automatically as soon as the Xbox sends a packet out.

That is overly optimistic; the IPv6 roll-out is not helped by the approach of trying to make everybody learn new ways to essentially keep doing the same thing... DHCPv6, by the way, allows assignment of "randomish" addresses as well; the point is centralized assignment versus leaving it to the devices. Say you want to actually encode information into the addresses (64 bits is a lot to play with): having control over assignment is somewhat nice.

I do not believe that these are the only dimensions driving central address assignments.

Is it? People folded stuff that in IPv4 was typically done by DHCP into RAs, so the need for the functionality is still there, and I question whether RAs are so much better than DHCP... Forcing people who understand DHCP to switch, as I said, forces them to learn new ways to keep doing old things, or worse, tries to force them to do things differently (without necessity).

I have no opinion on the comparative merits of RA versus DHCPv6, but I consider it arrogant to declare to those who have been using DHCP in the past and have the skill set and expertise that they need to learn to do the same thing the RA way. Sorry, that is simply bad style.

But that is trusting the now-exposed devices to do the right thing on all ports. That is possible, but not much more secure than running UPnP (except that it avoids vulnerabilities in UPnP itself).
This is somewhat relevant, because consoles with perfectly playable network games fall out of software maintenance, and keeping a device without even the fiction of continuous security updates fully exposed to the internet does not appear to be the best strategy...

That is not necessarily something you can control though.

When it comes to games you don't open up the entire device, just the well-known game ports. The PS4 has ports dedicated to games, the Xbox probably the same. But as I said, the game publishers just need to decide what port they use, and then as soon as the device makes an outbound request on that port the reply traffic is let through. Game publishers are perfectly capable of this. It didn't work for IPv4 because there was no 1:1 mapping. If you put 37 PS4 devices behind one NAT at a party they can't all have port 3894 or whatever, so you need a special crap mechanism for punching firewall holes beyond what just automatically happens with a stateful firewall.

DHCPv6 really isn't a thing. It's OK for people to learn that. You really, really don't want to centralize IPv6 assignment. Devices should be assumed to have at least 8 or so IPv6 addresses each. It's fine for a device to simply generate an IPv6 address specifically to serve ONE APPLICATION. IP addresses were never device-specific; they are always simply an endpoint for communications. There's no reason why, for example, an Xbox couldn't generate one for the game client and a separate one for hosting a game server, maybe a separate one for a VoIP endpoint, maybe a separate one for joining lobbies that only exists for the duration of one game (perhaps a useful DDoS prevention method). DHCP would prevent that kind of thing and leave us stuck in the past unnecessarily. Think about it this way: it's the right of the endpoint to have however many IP addresses are needed to perform its tasks, and the network admin can't know what the task requires.
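As a hedged illustration of the "one address per application" idea (the prefix and the role names are made up; a real host would still have to configure each address on an interface and pass DAD):

```python
import os
import ipaddress

prefix = ipaddress.IPv6Network("2001:db8:0:1::/64")  # the /64 advertised in the RA (placeholder)

def fresh_address(net: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
    """Combine the advertised /64 with a freshly generated random 64-bit interface ID."""
    iid = int.from_bytes(os.urandom(8), "big")
    return net.network_address + iid

# One throwaway address per role, generated by the device itself, no DHCP server involved.
for role in ("game-client", "game-server", "voip", "lobby"):
    print(role, fresh_address(prefix))
```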

But that is quite similar between IPv6 and IPv4; the question of whether to do this manually or via UPnP is orthogonal to v4 versus v6, IMHO.

That is not how I think about this; the end devices will need to use the ports used by the respective servers... I guess it was/is the need for NAT traversal that made games consolidate on a few ports in the ephemeral range...

That only works for two-way channels initiated from the inside; it does not allow initiation from the outside, which for peer-to-peer games results in a chicken-and-egg problem...

I think RFC8415 thinks differently :wink:

Well, I mildly want to, but the bigger discussion is about exactly that attitude of IPv6 evangelists that IMHO still hinders the IPv6 roll-out. Cater to the needs of the users; do not try to force-feed the new and shiny stuff just because it is new and shiny...

That IMHO is a policy question that is very much open. If, say, you want to use cake for per-host fairness, IP address stability is quite helpful (cake just needs something to identify a host by that is stable for a few seconds; after that, if it is 1:1 exchanged for a new identifier, things will still work). Privacy extensions mostly work because they are cycled through, but one/multiple IPs per application will not. Which can well be acceptable, but it certainly falls under policy in my opinion.
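A toy illustration of the concern (this is not cake's actual algorithm, just the per-host-then-per-flow split it approximates): a device that spreads its traffic over several source addresses is counted as several "hosts" and collects several shares.

```python
from collections import defaultdict

def per_host_shares(flows, capacity=100.0):
    """flows: list of (source_address, flow_id); split capacity per host, then per flow."""
    by_host = defaultdict(list)
    for src, flow in flows:
        by_host[src].append(flow)
    host_share = capacity / len(by_host)
    return {flow: host_share / len(host_flows)
            for host_flows in by_host.values() for flow in host_flows}

# A laptop with one stable address and three flows, versus a console that
# uses a separate address for each of its three applications:
flows = [("laptop", "f1"), ("laptop", "f2"), ("laptop", "f3"),
         ("console-a", "game"), ("console-b", "voice"), ("console-c", "lobby")]
print(per_host_shares(flows))
# The laptop's three flows share one quarter of the capacity; the console collects three quarters.
```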

Except IPv6 calls this the interface ID, so initially persistence per interface was a goal (which fits the idea of converting MAC addresses into interface IDs). But yeah, 64 bits is a lot to play around with... and to encode interesting/relevant information.

Yes there is: policy. In some networks that is fine and dandy, in others it is not :wink:

Again, I respectfully disagree; those that want to use DHCP because it fits their policy model/ideal will do so, using whatever option IPv6 offers to do so.

But that is where policy comes in. If the admin is fine with that, she/he can allow that within or without the framework of IPv6; if the admin disagrees, who are we to force this upon them?

As an example, my employer extensively uses DHCPv4 and has completely shunned IPv6 because it does not fit well enough into their policy and tooling (and, to be honest, their manpower budget; local IT is great but a very small team).

No, they need the ports open to be servers. That is, to accept inbound traffic that doesn't first have outbound traffic associated with it. It's the non-1:1 nature of IPv4 behind NAT that makes this problematic.

Not really; for devices to discover that they want to play together typically requires some "matchmaking", but as soon as the matches are made they are made between real, actual ENDPOINTS, not NATted devices. As soon as you know you want to play with devices A, B, C, D, you send them each a UDP packet and they send you a UDP packet; you do this repeatedly for like 3 or 4 packets and you've got a connection. The reason you can't do this with IPv4 and NAT is that it's the router that receives the packet, and it knows nothing about the fact that you wanted that packet to go to the Xbox behind it.
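A minimal, hedged sketch of that exchange (the peer address and port are placeholders; both sides would run the same thing against each other's global address), just to show that the "firewall hole" is nothing more than the state created by the first outbound packet:

```python
import socket
import time

PEER = "2001:db8::1234"   # the other player's global address (placeholder)
PORT = 3894               # whatever port the game has agreed on

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.bind(("::", PORT))
sock.settimeout(1.0)

for attempt in range(4):                      # "3 or 4 packets" is usually enough
    sock.sendto(b"hello", (PEER, PORT))       # outbound packet opens the stateful firewall
    try:
        data, addr = sock.recvfrom(1500)      # the peer's packet now gets through
        print("connected:", data, "from", addr[0])
        break
    except socket.timeout:
        time.sleep(0.5)
```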

The correct enforcement mechanism for this is at the MAC level. Otherwise you don't really know whether 16 IPv6 addresses are 16 addresses on one device or 16 separate devices.

Also, in the future an Xbox or an Android phone or a desktop will just routinely spin up 118 "containers", each of which will want at least one IPv6 address. It's not viable to have 1 IPv6 address per physical box. The sooner people drop that notion, the sooner we will make progress; the horse left that barn years ago when Linux created containers (some time around 2010?).

Think of it like this: there is a network within every device, and the policy of the owner of the network outside the device doesn't get to extend into the device itself. I consider this kind of a human rights thing. You own the device; you decide what it does inside it. If a network provider is willing to let you connect your device to their network at all, they should allow you as many IPv6 host addresses as you like to facilitate the internal structure of your "thing" (for example, it shouldn't be up to the DHCPv6 administrator whether an Android phone spins up a separate IP to serve a peer-to-peer communications protocol, different from the one it uses to surf the web, different from the one it uses to connect to a VPN, and different from the one it uses to share files on the local LAN, etc.). There are legit needs to separate services within a device.

Yes, a network administrator can always just say "hey you can't connect to my network" but as soon as they do, IMHO the last 64 bits are for the hosts.

I think this is the clear reason that Google took their stand on DHCPv6: they could see that it was not viable for an Android phone to be bound to the network provider's rules about how many addresses it could have. The assumption that each device only needs one is a false assumption coming from the fact that IPv4 was so limited in addresses. The point of IPv6 was to free everyone from address scarcity. Allowing DHCPv6 to perpetuate address scarcity will MAJORLY hold back legitimate innovation that is essentially already here, and it is particularly relevant to mobile devices such as phones that might participate in 30 or 40 networks throughout the day.

FWIW... I get 10.x and 100.x addresses on the "inside" of my ISP; here's a typical traceroute out from my Cox Cable:

traceroute to Google.com (142.250.217.142), 30 hops max, 46 byte packets
 1  10.72.120.1  13.142 ms
 2  100.120.105.84  7.624 ms
 3  100.120.104.6  12.010 ms
 4  68.1.1.13  9.471 ms
 5  72.215.224.173  10.277 ms
 6  *
 7  142.250.226.50  9.474 ms
 8  209.85.249.95  10.251 ms
 9  142.250.217.142  9.535 ms

Nomenclature disconnect, sorry. By servers I meant the machines coordinating world state for a game, which might need/want to cold-call a game client... so I think we are thinking about the same situation, only I used misleading terms, sorry.

I disagree; UDP "communication", unlike TCP, is inherently unidirectional, and there is no reason to force a communication to use the same port pair in each direction...

Yes, but that matchmaking requires somebody to accept cold calls, and for a peer-to-peer gaming session without a central communication/matchmaking server you need to open these ports in the firewall... Your approach requires that I at the very least agreed beforehand on which ports to use... that is so "old-school"; really, ephemeral ports should not be used that way, you should always negotiate which ports to use from scratch for each unidirectional flow :wink: (I am trying to make a bad argument here and be mildly funny).

That is not that helpful, as MACs restrict you to a single L2 domain... what about multiple levels of routers in a network where rate sharing only matters at the exit/entry points?

Well, this is where centralized IP assignment via DHCP can come in handy: there you know how many and which addresses a specific machine uses... you can't know that with SLAAC and RAs.

That again is a policy question. Sometimes (more often) that might be what you want; sometimes it's not, and you might want all containers/VMs NATed behind a single address.

Just because there are situations in which multiple addresses per host are desirable does not logically mean that the opposite (conditions in which the number of addresses needs to be fixed) is not also desirable... I am not arguing against the new capabilities that 128-bit addresses bring, just against the sometimes-heard notion that now everybody needs to change their ways, even if just to accomplish the same as before.

Well, as long as that network presents itself as one entity if asked by policy to do so, I have no issues with that. :wink: Again, I am not even taking a position on which of the alternatives is "better"; my point is that there is no reason why not all options should be available.

I might be more conservative than I had thought, but I am not willing to cede human rights to machines.

Yes, just as there are legit needs to keep things under tight control... my argument is still that the IPv6 "my (new) way or the highway" approach needlessly prolonged/prolongs the IPv6 roll-out.

Not necessarily; I can assign a /128, and IMHO should be allowed to do so, preferably from a central place (at the same time I should only do this with good justification). For example, once IoT goes IPv6 I would like to keep a close eye on those devices, as I trust them about as far as I can throw 'em... which is considerably easier if they are easy to track/filter on L3. But that is clearly a policy question, and for network policy I consider the local admin to be "in control" and think that the tools should be available to implement any policy desired. I really dislike the "here is a thing that is hard in X" line of reasoning, where proponents of X start telling you that you are doing things wrong and need to change your ways substantially only to keep achieving that thing...

That is a good illustration of the point I intended to make: CG-NAT deployment is not consistently using the 100.64/10 or the 10/8 range. Thanks!

It's not human rights ceded to machines; it's the human right of a person such as myself to be allowed to control what the endpoints I own are allowed to do. In the absence of a hard line on artificial scarcity, people will wield inappropriate power over other people. For example, ISPs will say "hey, you want to get on our network? Here's your one /128, you're allowed to put ONE device on our network; if you want more, pay more per device, fill out forms in triplicate, stay on hold for a couple of days, etc. etc."

Human-rights-wise (i.e. the rights of humans who need to be able to communicate with others on this planet) this is unacceptable. There needs to be a section of the address space which is under the control of the end device. This is a political issue, not a technological issue. Allowing networks to restrict things through DHCPv6 is basically an unacceptable political situation. The network operator gets to control the network portion of the address, namely the first 56 or fewer bits. The end network / home network administrator gets to control the local network bits (the final 8 or 16 bits of the network space, depending on whether you have a /56 or /48), and the end device controls the last 64 bits, subject only to the restriction that it not abuse this, by abiding by DAD.
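To make the split concrete, here is a small sketch with a documentation prefix (the /56, the subnet index, and the interface ID are all placeholders): the ISP fixes the first 56 bits, the local admin picks the 8 subnet bits, and the device picks its own 64-bit interface ID.

```python
import ipaddress

delegated = ipaddress.IPv6Network("2001:db8:abcd:ff00::/56")   # controlled by the ISP

subnet_index = 3                                               # chosen by the local admin (0..255)
subnet = list(delegated.subnets(new_prefix=64))[subnet_index]  # one of the 256 /64s

interface_id = 0x1d2e3f4a5b6c7d8e                              # chosen by the device itself
address = subnet.network_address + interface_id

print("delegated prefix:", delegated)   # 2001:db8:abcd:ff00::/56
print("local subnet:    ", subnet)      # 2001:db8:abcd:ff03::/64
print("device address:  ", address)
```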

Anything else will lead to real actual human rights abuses IMHO.

By analogy, the "host portion" (lower 64 bits) is like a person's name. Imagine if whenever you entered, say, a school you had to get permission from the school administrators to use your name. Perhaps at one school they say no, you must be assigned a name from a list; your name at this school will be Frank Gefiltefish, and not only that, but that will be the name used by all members of your family at once. Your daughter will also be Frank Gefiltefish, and whenever anyone attempts to communicate with anyone else using a name other than the one assigned by administrators, a loud noise will blare, blotting out communications. I think it's pretty obvious this is a huge, huge human rights violation. The phone you carry in your pocket is just an extension of your own capacity to communicate with others via a unique identity within the world. Blotting out people's ability to have a set of identities they control is a political problem.

I think we are thinking of different situations here. I am arguing about what happens in my network (or, by extension, in my employer's network); I am not talking about how ISPs should deliver IPv6 to their leaf networks/end-customers.

I am not concerned about ISPs not being able to use DHCPv6; I am concerned about IPv6 evangelists claiming that nobody should need DHCPv6 and central coordination in their own domain.

Again, I keep disagreeing, my private network, my private rules (that is different for public networks and networking providers).

My point is that leaf networks might want to use DHCPv6, and IMHO should be able to do so. From that perspective Google's Android stance is not great. And frankly, the 64-bit interface ID is way too precious to just waste on SLAAC...

Again, we seem to be talking past each other to some degree here; as so often, I fully agree on the ISP<->leaf-network angle that you describe, but that is not the main use case why folks complain about the lack of DHCPv6, if I understand correctly.

My private school: you are only allowed to be called Frank Gefiltefish while on my campus.

Funny. But again not a good analogy, because I really see no issue forcing my ideas of how my network should operate on the machines under my control... As I said, machines do not have human rights in my eyes. Customers of an ISP, however, do, as do employees of a company. But my rights are not infringed upon if my workplace network either allows SLAAC/RA or DHCPv6, or manually enforces setting of /128s per host & interface. It is not as if common protocols like UDP or TCP do not already have methods to allow multiplexing a host's IP address (aka ports).

That again is a different context; my online identity is not linked to my phone's IP address, and if need be I can try to obfuscate my IP address anyway (Tor, VPN). This is still no argument why, in my network, I should not be allowed to assign IPv6 addresses from a central point, something that DHCPv4 did reasonably well for IPv4. But I guess all I can do is note that we do not seem to be coming to an agreement on this point today. And to avoid repeating the same point over and over again, I thank you for your food for thought and will probably mull it over the next few days, resulting in a slightly changed perspective.


Yes, I think it's fine to say that there are contexts in which it might be fine to force everyone to use a silly name; for example, if you come to my birthday party and everyone must choose a silly name on their name tag, we all agree this is OK... everything is well and good. But if my landlord at my apartment intercepts my paper mail and shreds it unless it's addressed to "Tito Puente The 95th", this is a serious problem.

Mobile phones are right on that boundary where they don't logically belong to the owner of the network most of the time, and this is why Google took their stand, I believe. They realized that almost always there will be a political conflict between the network owner and the phone owner, who will be two different people, and so they took what seems like a rather extreme position when viewed purely as a technology issue. I don't think it's actually very extreme at all when you realize it maps literally one-to-one onto the question of how letters are allowed to be addressed to you. Sure, you don't control the name of the country, or of the city, or of the street, nor the street address, but there is a portion you do control: your name, or your business name, or your nom de plume that you publish novels under, or your username under which you publish free software, or whatever; there can be many aliases for you, legitimately. Letting a third party enforce a policy on what you call yourself, when that third party isn't yourself wearing a network admin hat in your own house, is problematic. Phones very often are mobile and the network owner is someone else. The IPv6 address is a publicly disseminated identity, and there are legit reasons why you absolutely MUST be able to have more than one. It's not OK to let a third party prevent that.

But yes, I think we've gone far enough for today thanks for playing!

In theory this thread is intended to be a broad discussion of what "blue sky" features are worth developing and stabilizing for OpenWrt, possibly firing up a branch for a while as CeroWrt did. I don't think we should shy away from controversial issues, and I rather enjoyed the IPv6 thread, so long as we can all keep it civil, make jokes, etc., for this style of watercooler conversation.

I would dearly like better IPv6 and IPv6 transition mechanism support. As one example, OpenWrt does do source-specific routing, but the RA mechanism for that got hung up in the IETF. It's really needed when you want failover capability, so I'd like to resurrect that. Also, finding a sane way to deprioritize and/or take away IPv6 addresses that aren't working at that particular moment in time wasn't tightly integrated when last I looked.

I keep encouraging folk to give Tailscale a shot. It's really impressive...

Yes, and what about IPv6 multihoming? It seems like perhaps NPT is the mechanism of choice there, but I'm not convinced; it still breaks things like SDP or other packets that have IPv6 addresses embedded in them, directing traffic.

Is there a good multihoming methodology that doesn't involve being a big organization with your own AS and BGP announcements?

Also, how about IPv6 multicast routing beyond a site? We really should be doing global multicast for a variety of things, for example "broadcast television", and perhaps some kind of global service discovery (finding nearby DNS, NTP, or Debian mirror sites, e.g.), or global market data.

I would argue that ship has mostly sailed; people prefer non-linear media wherever possible. Where it might make sense is for things that are preferably consumed in real time, but the only thing I can come up with is sports events... and these are typically sitting behind paywalls; not sure it is worth optimizing global networks for the profit of a few corporations? (Nothing against free enterprise, just questioning the incentive structure here...)

Yes, I'm thinking news broadcasts, free (delayed) financial tickers, CVE updates (jeez, Log4j is a disaster and I wish I'd heard about it before my kids spent hours yesterday on Minecraft; I suspect at least one of them was hit by it), weather alerts, whatever.

There are a wide variety of globally relevant informational bulletins that it would be extremely useful to be able to just "turn on" and start receiving. The alternative is centralizing dissemination of this information. For example, suppose you want to be a Debian mirror. So now you are regularly hitting Debian's main servers asking what's new. Instead, what if you subscribe to "Debian package updates", and when notified of a new package you schedule a process to go get it at a random time later? Instead of 1000 mirrors all hitting Debian a few times a day, Debian just sends out a continuous low-bandwidth multicast stream... Same for Microsoft Windows updates, or Android OTA updates. It seems like my Motorola phones sometimes take months after an update is available to get it... Motorola rolls it out slowly because they have to avoid being hammered by 800,000,000 phones as soon as it's available. Instead, just broadcast the latest update in a loop; the phone turns on the multicast stream a couple of times a day, checks the version number of the current update, and if it's newer turns on reception of the binary...
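A hedged sketch of what the receiving side could look like (the group address and port are placeholders I picked, not an assigned group; real deployments would also want signed payloads): join the multicast group, listen for announcements, and decide later whether to fetch anything.

```python
import socket
import struct

GROUP = "ff1e::4242"   # placeholder transient global-scope multicast group
PORT = 30000           # placeholder port

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("::", PORT))

# Join the group on the default interface (index 0): 16-byte group address + interface index.
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", 0)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

while True:
    data, addr = sock.recvfrom(65535)
    print(f"announcement from {addr[0]}: {data[:80]!r}")
```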

Firmware updates for all sorts of IoT garbage could really improve security without any need to "phone home". Think of IP cameras and thermostats and refrigerators and whatever else. Manufacturers just stream a JSON object that describes the current set of binaries for all their products, and on a separate channel signed binaries.

Not to mention how much radio spectrum could be freed up from specialized broadcasts...

BitTorrent updates have been tried for apt.

UFTP is the only major multicast file transfer protocol I know of that is still working: http://uftp-multicast.sourceforge.net/

Far too many to make "multicast" a viable idea. Really, it is an optimization for when a considerable fraction of your traffic is identical, but that needs to be balanced against the number of such parallel streams needed for sufficient coverage... It is a gamble on the probability of being watched: if on average each such channel gets >= 2 subscribers/viewers, multicasting makes sense; below that, rather not, purely on rate-optimization grounds.

And for data distribution, BitTorrent pretty much took over that function (IIRC even Microsoft has a peer-to-peer update distribution system that users can opt in to).

+1; but I doubt that they would get priority in such a scheme; after all, they tend to be as cheap as possible, and that means cutting all corners, like "support" :wink:

Potentially, but who would be profiting from that mainly?