Having set up a management iface (mgt), it is reachable via ipv4 only, not ipv6, whenever no client is connected to the mgt iface, and dmesg reports:
IPv6: ADDRCONF(NETDEV_UP): br-mgt: link is not ready
Only once a client gets connected to the mgt interface and the link status changes to UP does the respective ipv6 address become accessible.
This seems odd because there is no such issue with ipv4. To illustrate the problem, here is the state while connected to the router via the lan iface and trying to reach the mgt iface (with no client connected):
8: lan3@eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master br-mgt state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
23: br-mgt: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
All of the above indicates that the iface is up and running, except for the link state. And the ipv4 address is accessible:
ping 192.168.112.12
PING 192.168.112.12 (192.168.112.12) 56(84) bytes of data.
64 bytes from 192.168.112.12: icmp_req=1 ttl=64 time=0.061 ms
64 bytes from 192.168.112.12: icmp_req=2 ttl=64 time=0.067 ms
but its equivalent ipv6 addresses are not:
ping6 fd30:d64c:1eed:4c3a::12
PING fd30:d64c:1eed:4c3a::12(fd30:d64c:1eed:4c3a::12) 56 data bytes
From fd30:d64c:1eed:8f8d:9c50:7976:414e:72e3 icmp_seq=1 Destination unreachable: Address unreachable
From fd30:d64c:1eed:8f8d:9c50:7976:414e:72e3 icmp_seq=2 Destination unreachable: Address unreachable
ping6 fd30:d64c:1eed:4c3a:6133:a122:a5ca:7b4a
PING fd30:d64c:1eed:4c3a:6133:a122:a5ca:7b4a(fd30:d64c:1eed:4c3a:6133:a122:a5ca:7b4a) 56 data bytes
From fd30:d64c:1eed:8f8d:9c50:7976:414e:72e3 icmp_seq=1 Destination unreachable: Address unreachable
From fd30:d64c:1eed:8f8d:9c50:7976:414e:72e3 icmp_seq=2 Destination unreachable: Address unreachable
Why would the ipv6 address of the mgt iface be reachable only when a client is connected to that iface (and thus the link state is UP), while that does not matter for its ipv4 address?
I am afraid that does not explain (to me) why the iface is reachable over ipv4 but not ipv6 - it seems illogical: either the iface is reachable (both ipv4 and ipv6) or it is not, but not one way and not the other.
For lan3 (the physical port) that would make sense, but for the (virtual/bridge) mgt iface it should not matter whether it has a carrier or not - just as it does not for ipv4.
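To pin down where the asymmetry lives, it may help to compare what the kernel installs in the local routing table for each address family while the bridge has no carrier. A diagnostic sketch, assuming br-mgt is the bridge name shown above:

```shell
# With no client attached (br-mgt shows NO-CARRIER), list the kernel's
# "local" table entries per address family for the bridge:
ip -4 route show table local dev br-mgt
ip -6 route show table local dev br-mgt
# If only the -4 command prints a "local ..." entry, the kernel has not
# installed a local route for the ipv6 address, which would explain why
# ping works over ipv4 but not ipv6.
```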
It works the same on Fedora Linux, so it's likely not an OpenWrt-specific feature/issue.
You can try raising the issue on https://bugzilla.kernel.org/ if you are sure about your point.
Or you can add the route manually if you just want to ping IPv6 on that interface.
# ip addr show dev enp3s0
2: enp3s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether c8:5b:76:f8:d2:68 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.1/24 scope global enp3s0
valid_lft forever preferred_lft forever
inet6 fd12::34/64 scope global tentative
valid_lft forever preferred_lft forever
# ip route show type local table all dev enp3s0
local 192.168.2.1 table local proto kernel scope host src 192.168.2.1
# ping -q -c 3 fd12::34
PING fd12::34(fd12::34) 56 data bytes
From fdd1:5b88:f9d6::cf9: icmp_seq=1 Destination unreachable: Address unreachable
From fdd1:5b88:f9d6::cf9: icmp_seq=2 Destination unreachable: Address unreachable
From fdd1:5b88:f9d6::cf9: icmp_seq=3 Destination unreachable: Address unreachable
--- fd12::34 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 27ms
pipe 3
# ip route add local fd12::34 dev enp3s0 table local proto kernel scope host
# ip route show type local table all dev enp3s0
local 192.168.2.1 table local proto kernel scope host src 192.168.2.1
local fd12::34 table local proto kernel metric 1024 pref medium
# ping -q -c 3 fd12::34
PING fd12::34(fd12::34) 56 data bytes
--- fd12::34 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 54ms
rtt min/avg/max/mdev = 0.087/0.093/0.100/0.012 ms
Do you understand now?
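Note that a manually added route does not survive a reboot or an interface reload. On OpenWrt you could re-add it from a hotplug script - a sketch under the assumption that the logical interface is named mgt and the address/device are the ones from this thread:

```shell
# /etc/hotplug.d/iface/99-mgt-local-route
# Re-install the local ipv6 route whenever the mgt interface comes up;
# ACTION and INTERFACE are set by netifd for iface hotplug scripts.
[ "$ACTION" = ifup ] && [ "$INTERFACE" = mgt ] && \
    ip route add local fd30:d64c:1eed:4c3a::12 dev br-mgt table local 2>/dev/null
```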
From another point, you can:
Check sysctl net.ipv6 and perhaps find some parameter to adjust the behavior.
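For example, these are the knobs under net.ipv6 that look most related to address readiness on a link that is down (the sysctl names are real; whether any of them changes this particular behavior is untested):

```shell
# Inspect candidate parameters for the bridge from this thread (br-mgt):
sysctl net.ipv6.conf.br-mgt.accept_dad       # 0 disables duplicate address detection
sysctl net.ipv6.conf.br-mgt.optimistic_dad   # permit using addresses still undergoing DAD
sysctl net.ipv6.ip_nonlocal_bind             # allow binding to non-local ipv6 addresses
```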
force_link (boolean, optional; default: 1 for protocol static, else 0) - Specifies whether ip address, route, and optionally gateway are assigned to the interface regardless of the link being active ('1') or only after the link has become active ('0'); when set to '1', carrier sense events do not invoke hotplug handlers.
@mikma, loopback is mentioned right above the quoted line.
And force_link can hardly help, because while the state is NO-CARRIER, there are both IPv4 and IPv6 routes in table main, but only an IPv4 route in table local.
force_link does not make a difference, as @vgaetera already pointed out.
Loopback is not really designed/designated/good practice for being a management interface that needs to be reachable and bridged with a physical lan port.
I checked the sysctl net.ipv6 options but there is nothing apparent (to me) that would make a difference, considering the lack of an ipv6 route in the local table in the first place.
ip route add local fd30:d64c:1eed:4c3a::12 dev br-mgt table local proto kernel scope host
makes the iface reachable via ping6 and traceroute, but I still cannot bind any services (e.g. sshd, unbound, lighttpd) to that ipv6 address:
sshd[19791]: error: Bind to port 43078 on fd30:d64c:1eed:4c3a::12 failed: Address not available.
whilst ping6 fd30:d64c:1eed:4c3a::12 works:
PING fd30:d64c:1eed:4c3a::12(fd30:d64c:1eed:4c3a::12) 56 data bytes
64 bytes from fd30:d64c:1eed:4c3a::12: icmp_seq=1 ttl=64 time=0.074 ms
64 bytes from fd30:d64c:1eed:4c3a::12: icmp_seq=2 ttl=64 time=0.084 ms
64 bytes from fd30:d64c:1eed:4c3a::12: icmp_seq=3 ttl=64 time=0.086 ms
^C
--- fd30:d64c:1eed:4c3a::12 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2094ms
rtt min/avg/max/mdev = 0.074/0.081/0.086/0.009 ms
Currently I run all those services over ipv4 on that mgt interface and would like to transition to ipv6, but that seems somewhat infeasible.
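One generic way around the "Address not available" bind error is to let processes bind to ipv6 addresses the kernel does not (yet) consider local. A hedged sketch - this setting is system-wide and loosens a kernel sanity check, so weigh it against your security posture:

```shell
# Allow sockets to bind to non-local ipv6 addresses (Linux >= 4.3):
sysctl -w net.ipv6.ip_nonlocal_bind=1
# Persist it in /etc/sysctl.conf (or a sysctl.d fragment) and restart
# the affected services (sshd, unbound, lighttpd) afterwards.
```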
Interface binding is an additional point of failure and more so for critical services such as SSH.
See default OpenWrt configuration for dropbear and uhttpd, neither one uses interface binding.
Properly configured firewall should be enough for access control.
I have never had any issue with ipv4 binding of services. I suppose that perspective differs per user, and discussing the pros/cons of ip binding is beside the thread topic.
Thus far I have not found a specification that says an ipv4 local route for a bridge iface is to be available in the No-Carrier state while the ipv6 one is not.