OpenWrt 22.03.2 frequent disconnects // odhcp6c[5287]: Failed to send SOLICIT message to ff02::1:2 (Network unreachable)

Hi,
I'm suffering regular connection dropouts.
I'm guessing it's some problem with prefix delegation.
I've got a DS-Lite connection from a German ISP,
so IPv6 and an IPv4-in-IPv6 tunnel are mandatory.

My console shows me the following:

Sun Jan 15 18:44:04 2023 daemon.debug pppd[5812]: rcvd [LCP TermReq id=0x4]
Sun Jan 15 18:44:04 2023 daemon.info pppd[5812]: LCP terminated by peer
Sun Jan 15 18:44:04 2023 daemon.err odhcp6c[5994]: Failed to send RS (Network unreachable)
Sun Jan 15 18:44:04 2023 daemon.err odhcp6c[5994]: Failed to send RELEASE message to ff02::1:2 (Network unreachable)
Sun Jan 15 18:44:04 2023 daemon.notice netifd: Network device 'pppoe-wan' link is down
Sun Jan 15 18:44:04 2023 daemon.notice netifd: Network alias 'pppoe-wan' link is down
Sun Jan 15 18:44:04 2023 daemon.notice netifd: Interface 'wan_6' has link connectivity loss
Sun Jan 15 18:44:04 2023 daemon.debug pppd[5812]: Script /lib/netifd/ppp-down started (pid 6407)
Sun Jan 15 18:44:04 2023 daemon.debug pppd[5812]: sent [LCP TermAck id=0x4]
Sun Jan 15 18:44:04 2023 daemon.notice pppd[5812]: Modem hangup
Sun Jan 15 18:44:04 2023 daemon.notice pppd[5812]: Connection terminated.
Sun Jan 15 18:44:04 2023 daemon.info pppd[5812]: Connect time 5.6 minutes.
Sun Jan 15 18:44:04 2023 daemon.info pppd[5812]: Sent 129525205 bytes, received 627894189 bytes.
Sun Jan 15 18:44:04 2023 kern.warn kernel: [14127.210208] ip6_tunnel: ds-wan_6_4 xmit: Local address not yet configured!
Sun Jan 15 18:44:04 2023 kern.warn kernel: [14127.230158] ip6_tunnel: ds-wan_6_4 xmit: Local address not yet configured!
Sun Jan 15 18:44:04 2023 daemon.notice netifd: Interface 'wan' has lost the connection
Sun Jan 15 18:44:04 2023 daemon.notice netifd: wan_6 (5994): Command failed: ubus call network.interface notify_proto { "action": 0, "link-up": false, "keep": false, "interface": "wan_6" } (Permission denied)
Sun Jan 15 18:44:04 2023 daemon.warn dnsmasq[1]: no servers found in /tmp/resolv.conf.d/resolv.conf.auto, will retry
Sun Jan 15 18:44:04 2023 kern.warn kernel: [14127.250149] ip6_tunnel: ds-wan_6_4 xmit: Local address not yet configured!
Sun Jan 15 18:44:04 2023 kern.warn kernel: [14127.260174] ip6_tunnel: ds-wan_6_4 xmit: Local address not yet configured!
Sun Jan 15 18:44:04 2023 kern.warn kernel: [14127.270084] ip6_tunnel: ds-wan_6_4 xmit: Local address not yet configured!
Sun Jan 15 18:44:04 2023 kern.warn kernel: [14127.272192] ip6_tunnel: ds-wan_6_4 xmit: Local address not yet configured!
Sun Jan 15 18:44:04 2023 daemon.debug pppd[5812]: Script /lib/netifd/ppp-down finished (pid 6407), status = 0x1
Sun Jan 15 18:44:04 2023 daemon.info pppd[5812]: Exit.
Sun Jan 15 18:44:04 2023 kern.warn kernel: [14127.290124] ip6_tunnel: ds-wan_6_4 xmit: Local address not yet configured!
Sun Jan 15 18:44:04 2023 daemon.notice netifd: Interface 'wan' is now down
Sun Jan 15 18:44:04 2023 daemon.notice netifd: Interface 'wan' is setting up now
Sun Jan 15 18:44:04 2023 kern.warn kernel: [14127.310148] ip6_tunnel: ds-wan_6_4 xmit: Local address not yet configured!
Sun Jan 15 18:44:04 2023 daemon.err insmod: module is already loaded - slhc
Sun Jan 15 18:44:04 2023 daemon.err insmod: module is already loaded - ppp_generic
Sun Jan 15 18:44:04 2023 kern.warn kernel: [14127.329434] ip6_tunnel: ds-wan_6_4 xmit: Local address not yet configured!
Sun Jan 15 18:44:04 2023 daemon.err insmod: module is already loaded - pppox
Sun Jan 15 18:44:04 2023 daemon.err insmod: module is already loaded - pppoe
Sun Jan 15 18:44:04 2023 kern.warn kernel: [14127.349410] ip6_tunnel: ds-wan_6_4 xmit: Local address not yet configured!
Sun Jan 15 18:44:04 2023 daemon.info pppd[6452]: Plugin pppoe.so loaded.
Sun Jan 15 18:44:04 2023 daemon.info pppd[6452]: PPPoE plugin from pppd 2.4.9
Sun Jan 15 18:44:04 2023 daemon.notice pppd[6452]: pppd 2.4.9 started by root, uid 0
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]: Send PPPOE Discovery V1T1 PADI session 0x0 length 12
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]:  dst ******** src ********
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]:  [service-name] [host-uniq  34 19 00 00]
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]: Recv PPPOE Discovery V1T1 PADO session 0x0 length 30
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]:  dst ********  src ********
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]:  [service-name] [host-uniq  34 19 00 00] [AC-name ber0304aihk001]
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]: Send PPPOE Discovery V1T1 PADR session 0x0 length 12
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]:  dst ********  src ********
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]:  [service-name] [host-uniq  34 19 00 00]
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]: Recv PPPOE Discovery V1T1 PADO session 0x0 length 30
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]:  dst ********  src ********
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]:  [service-name] [host-uniq  34 19 00 00] [AC-name ber0304aihk001]
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]: Recv PPPOE Discovery V1T1 PADS session 0x7fd2 length 12
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]:  dst ********  src ********
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]:  [service-name] [host-uniq  34 19 00 00]
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]: PADS: Service-Name: ''
Sun Jan 15 18:44:04 2023 daemon.info pppd[6452]: PPP session is 32722
Sun Jan 15 18:44:04 2023 daemon.warn pppd[6452]: Connected to ******** via interface eth0.7
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]: using channel 6
Sun Jan 15 18:44:04 2023 kern.info kernel: [14127.585604] pppoe-wan: renamed from ppp0
Sun Jan 15 18:44:04 2023 daemon.info pppd[6452]: Renamed interface ppp0 to pppoe-wan
Sun Jan 15 18:44:04 2023 daemon.info pppd[6452]: Using interface pppoe-wan
Sun Jan 15 18:44:04 2023 daemon.notice pppd[6452]: Connect: pppoe-wan <--> eth0.7
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]: sent [LCP ConfReq id=0x1 <mru 1492> <magic 0x81c43a76>]
Sun Jan 15 18:44:04 2023 daemon.debug pppd[6452]: rcvd [LCP ConfAck id=0x1 <mru 1492> <magic 0x81c43a76>]
Sun Jan 15 18:44:04 2023 daemon.err odhcp6c[5994]: Failed to send SOLICIT message to ff02::1:2 (Network unreachable)
Sun Jan 15 18:44:04 2023 daemon.notice netifd: Interface 'wan_6' is now down
Sun Jan 15 18:44:04 2023 daemon.notice netifd: Interface 'wan_6_4' has lost the connection
Sun Jan 15 18:44:04 2023 daemon.notice netifd: Interface 'wan_6' is disabled
Sun Jan 15 18:44:04 2023 daemon.notice netifd: tunnel 'ds-wan_6_4' link is down
Sun Jan 15 18:44:04 2023 daemon.notice netifd: Interface 'wan_6_4' is now down
Sun Jan 15 18:44:04 2023 daemon.notice netifd: Interface 'wan_6_4' is setting up now
Sun Jan 15 18:44:04 2023 daemon.info dnsmasq[1]: read /etc/hosts - 4 addresses
Sun Jan 15 18:44:04 2023 daemon.info dnsmasq[1]: read /tmp/hosts/dhcp.cfg01411c - 2 addresses
Sun Jan 15 18:44:04 2023 daemon.info dnsmasq[1]: read /tmp/hosts/odhcpd - 2 addresses
Sun Jan 15 18:44:04 2023 daemon.info dnsmasq-dhcp[1]: read /etc/ethers - 0 addresses
Sun Jan 15 18:44:05 2023 daemon.notice netifd: Interface 'wan_6_4' is now down

My interfaces are configured as follows.

config interface 'loopback'
	option device 'lo'
	option proto 'static'
	option ipaddr '127.0.0.1'
	option netmask '255.0.0.0'

config globals 'globals'
	option ula_prefix 'fda5:c390:1b33::/48'

config device
	option name 'br-lan'
	option type 'bridge'
	list ports 'eth1.1'

config interface 'lan'
	option device 'br-lan'
	option proto 'static'
	option ipaddr '192.168.1.1'
	option netmask '255.255.255.0'
	option ip6assign '60'

config switch
	option name 'switch0'
	option reset '1'
	option enable_vlan '1'

config switch_vlan
	option device 'switch0'
	option vlan '1'
	option ports '1 2 3 4 6t'
	option vid '1'

config switch_vlan
	option device 'switch0'
	option vlan '2'
	option ports '0t 5'
	option vid '7'

config device
	option type '8021q'
	option ifname 'eth0'
	option vid '7'
	option name 'eth0.7'
	option mtu '1500'
	option mtu6 '1500'
	option macaddr 'CC:40:D0:56:15:46'

config interface 'wan'
	option proto 'pppoe'
	option device 'eth0.7'
	option username '****@online.de'
	option password '****'
	option ipv6 'auto'
	option pppd_options 'debug'
	option keepalive '100 60'

config interface 'Modem_Dreytek'
	option proto 'static'
	option ipaddr '192.168.100.2'
	option netmask '255.255.255.0'
	option device 'eth0.7'
	option gateway '192.168.100.1'

config device
	option name 'ds-wan_6_4'
	option mtu '1452'
	option mtu6 '1452'

I've tried to debug it myself for several days now; being new to networking, I'm clueless how to proceed further.
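
For reference, this is roughly what I have been checking so far, going by forum guides (interface names taken from my log and config above):

# follow odhcp6c / pppd / netifd messages live
logread -f | grep -E 'odhcp6c|pppd|netifd'

# current state of the DHCPv6 and DS-Lite sub-interfaces
ifstatus wan_6
ifstatus wan_6_4

# does pppoe-wan have a global IPv6 address and default route at all?
ip -6 addr show dev pppoe-wan
ip -6 route show default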

Seems that your ISP terminated the connection.

It looks similar to my case: Possible regression in pppd
The question is why, as my ISP said they do not set any session timeout to force a reconnect, for example. Since our hardware is different and only the OpenWrt version is the same, it may be a regression after all.

Our cases seem similar, but I get those disconnects roughly every 10 minutes, not daily.
Some days it works like a charm, and the next day it is completely unusable.

I will try to revert to 21.02.5 and see what happens.
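
For the record, I plan to flash the 21.02.5 image with sysupgrade and start from a clean config, since settings written by 22.03 may not carry back cleanly; the image name below is only a placeholder for my device's sysupgrade image:

# -n discards the current configuration (clean start)
# add -F only if the image check refuses the downgrade
sysupgrade -n /tmp/openwrt-21.02.5-sysupgrade.bin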

I guess nobody spotted a mistake in my config that could be breaking my setup?

Thanks for your help.

Update after several days without issues.

Going back to 21.02.5 has fixed my problems completely.

AND it seems that prefix delegation needs a DHCPv6 interface to work properly.
The default DHCPv6 interface does the job for me.
Can't say if it is the proper configuration, but it is working for now.
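
For reference, what I ended up with is essentially the stock DHCPv6 alias on top of the PPPoE wan. This is only a rough sketch of it; the exact option values may differ on other setups:

config interface 'wan6'
	option device '@wan'
	option proto 'dhcpv6'
	option reqaddress 'try'
	option reqprefix 'auto'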

Not true. I have a static interface and can assign prefixes from it using SLAAC only (obviously, you must properly specify the PD).

So I'm not sure about your case - if you mean being assigned a PD dynamically, then the answer is yes, it's required to run a DHCPv6 client.

(I realize now that you may not have been aware of that.)
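
Roughly what I mean, as a minimal sketch rather than my exact config (the device name and the prefixes are placeholders; use whatever your ISP delegates statically):

config interface 'wan6static'
	option device 'eth0.2'
	option proto 'static'
	option ip6addr '2001:db8::2/64'
	option ip6gw '2001:db8::1'
	list ip6prefix '2001:db8:1000::/56'

The LAN then takes its /64s from that prefix via ip6assign as usual, and setting option ra 'server' together with option dhcpv6 'disabled' in /etc/config/dhcp keeps the downstream assignment SLAAC-only.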

OK, thanks for clarifying that.
I think what got me off track is that I was running my previous setup without a DHCPv6 client,
but with the DS-Lite interface that comes with the ds-lite package.
I guess the DS-Lite one does a similar thing, then.
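
For completeness, the manually configured DS-Lite interface from that old setup looked roughly like this; the AFTR address here is a placeholder and normally comes from the ISP (or is learned automatically via the DHCPv6 AFTR option):

config interface 'wan_dslite'
	option proto 'dslite'
	option peeraddr '2001:db8::aftr'
	option tunlink 'wan6'
	option mtu '1452'
	option encaplimit 'ignore'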
