First off, I'll say that I'm no expert on networking. I've tried to read as much as possible, but after going without a solution for over six months I've now decided to register here in an attempt to get help from you people who are more knowledgeable than me. @dlakelan @moeller0 please come and save me
I've had problems with latency spikes and "Request timed out" errors running OpenWrt on my Linksys WRT3200. It shows the most during games, where the game freezes completely, followed by rubber-banding to another place in the game when it gets the connection back. This happens independently of the time: during the day, night, week, weekend.
When pinging google.com from the desktop computer (connected to the router by cable), latency shows a stable 10-11 ms most of the time, but 1-15 times an hour it spikes to between 100 and 3000 ms. Sometimes this is followed by "81.17x.xxx.xxx: Destination host unreachable" and other times simply "Request timed out".
When I ssh into the router from a laptop (via wifi), where I've screen-recorded the typical freeze behaviour, it goes like this: 14ms -> 2769ms -> 1745ms -> 721ms -> 14ms
This is while having a terminal window open with both htop and "top -d 1" showing 0% irq, 0% sirq, 99% idle.
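To catch these intermittent spikes without staring at the terminal, the ping output can be filtered so only the outliers are kept. A minimal sketch, assuming a POSIX shell with awk; the `ping_spikes` name and the 100 ms default threshold are mine, not from this thread:

```shell
#!/bin/sh
# Sketch: keep only ping replies whose RTT exceeds a threshold (in ms).
# It parses the "time=" field, which appears in both the Linux reply
# format ("time=11.2 ms") and the Windows one ("time=2152ms TTL=52").
ping_spikes() {
  limit="${1:-100}"
  awk -v limit="$limit" '
    /time=/ {
      for (i = 1; i <= NF; i++)
        if ($i ~ /^time=/) {
          rtt = substr($i, 6) + 0   # numeric part of the RTT
          if (rtt > limit) print    # print the whole offending line
        }
    }'
}
```

Usage would be e.g. `ping google.com | ping_spikes 100`, leaving only the spike lines (and their timestamps, if ping is run with `-D`) to paste into a report.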
This started happening on 18.06 (possibly even 18.04) and I've tried running 19.07; as of writing this I have 19.07.1 installed. I'm running 100/100 fiber with the router connected directly to the wall socket/copper (no dsl/vdsl/modem). I've tried changing cables (I have 4 in total, 2 of them new, all cat 5e).
The symptoms have appeared on completely fresh installs as well as with SQM (cake and fq_codel), but right now the only thing I have installed is htop.
To give an overview:
Stock firmware when buying it - no problem
Installing Openwrt and running it for at least 1 year - no problem
Problem starting to occur in Openwrt
Reverting back to stock firmware - no problem
Installing stock firmware (Ver. 1.0.8.198828, release date 1/8/2020) - no problem
Clean install 18.06 - problem
Clean install 19.07 - problem
Clean install 19.07.1 - problem
Computer connected directly to wall - no problem
I have never had any problems with the WRT3200 or heard about incompatibilities etc., except for wifi issues. Are you testing this from the router itself, from a LAN machine with ethernet, or from a LAN machine with wifi?
If it's wifi... I'm not sure, it could be a briefly dropped connection or some such... But otherwise it should be fine:
14ms -> 2769ms -> 1745ms -> 721ms -> 14ms
that looks like bufferbloat, with the buffers slowly draining... rather than just a disconnect.
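As a rough sanity check on that reading: the delay a queue adds equals the queued bytes divided by the drain rate, so a 2769 ms spike on a 100 Mbit/s link implies roughly 35 MB sitting in a buffer somewhere. A quick back-of-the-envelope calculation, using the numbers from the ping trace above:

```shell
#!/bin/sh
# Queueing delay = queued bytes / link rate, rearranged:
#   queued kB = (rate in Mbit/s * delay in ms) / 8
rate_mbit=100    # the 100/100 fiber line
delay_ms=2769    # worst spike from the ping trace
queued_kb=$(( rate_mbit * delay_ms / 8 ))
echo "${queued_kb} kB queued"    # prints "34612 kB queued", i.e. ~34 MB
```

A buffer that deep is far beyond anything a sane home link needs, which is why the spikes-then-recovery pattern points at bloat rather than a plain disconnect.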
This part is from a LAN machine: a win10 desktop computer with ethernet (connected to the router).
This part is from a LAN machine with wifi: a linux laptop ssh'ed into the router and running the above commands on the router itself.
Both of the events described were captured simultaneously (I'm ssh'ed into the router from my laptop, running htop and ping google) while playing a game and experiencing the freeze lag/latency spike on the wired desktop computer (where I also have a cmd window with ping -t google open).
I don't have any SQM right now, as I did a fresh install of 19.07.1 yesterday, but these are the results from dslreports without it: http://www.dslreports.com/speedtest/59164016
When I've had SQM set up previously, the setup has been a little all over the place, as I've run the router as a wireguard client (https://mullvad.net/en/help/running-wireguard-router). I've always gotten A+, with SQM on the wireguard interface. Maybe I'm a little off topic now, as neither wireguard nor SQM are in the picture at the moment; that's a later problem.
these pings, do they go over the wireguard interface? maybe your VPN provider is really to blame; it could be dropping the tunnel and then renegotiating etc.?
I don't have any wireguard interface/VPN installed or active at all right now (at the time of recording the latency spikes/packet drops described in the first post). Wireguard on the router is my usual setup, but during all the testing described above (installing 18.06, 19.07, 19.07.1, stock firmware etc.) it's been completely vanilla, without any VPN connection. My first thought was also that the VPN was to blame, or that the router couldn't handle SQM and wireguard at the same time even on a 100/100 connection, but since I'm now on a clean install it can't possibly be connected to the VPN or a misconfigured SQM.
I'll try disabling both radios/wifi tomorrow and see if it makes a difference, but from memory the problem persisted when I tried it yesterday (and since I'm running the gaming pc over ethernet it's far-fetched anyway, but I'm really out of ideas here).
your dslreports example shows spikes in upload that look like bufferbloat. Can you set up SQM at, say, 80 Mbps each way (to be sure we're below your ISP rate) and run the dslreports test again?
you can only control bufferbloat at the bottleneck; if your ISP sometimes bottlenecks in one of their branch or core routers, you are out of luck...
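For reference, an SQM setup like the one suggested can be done via uci. This is a sketch, not a verified config: the `wan` section name is my label, `eth1.2` is assumed from the wan device mentioned later in this thread, and it presumes the sqm-scripts package is installed. Rates are in kbit/s:

```shell
# Sketch: SQM at 80/80 Mbit/s (below the 100/100 ISP rate) using cake.
uci set sqm.wan=queue
uci set sqm.wan.enabled='1'
uci set sqm.wan.interface='eth1.2'        # wan device, per this thread
uci set sqm.wan.download='80000'          # kbit/s
uci set sqm.wan.upload='80000'            # kbit/s
uci set sqm.wan.qdisc='cake'
uci set sqm.wan.script='piece_of_cake.qos'
uci set sqm.wan.linklayer='none'          # fiber/ethernet, no ATM/PTM overhead
uci commit sqm
/etc/init.d/sqm restart
```

The same settings are reachable from LuCI under Network > SQM QoS if luci-app-sqm is installed.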
Really interested to hear the results of the wifi disabled test...
Also, the bufferbloat plots show spikes up to ~500 ms in that short test interval (with 242 ms at the beginning of the idle test); understandably, "real-time" control in FPS games is not going to be fun.
Also, mtr is quite a decent tool for monitoring latency (it is a combination of traceroute and ping that will continuously traceroute and report current, average, maximum and minimum RTT as well as standard deviation); in your case, comparing minimum and maximum should be helpful. mtr can be installed and run from your router (opkg update; opkg install mtr), and using screen (opkg install screen) you can start it before your tests, disconnect from the monitoring host, and later reconnect to look at the results: log into the router via ssh, run screen -RR, and in the new screen session start e.g. mtr 1.1.1.1 (or any other host you used for testing). Then press Ctrl-A, Ctrl-D to detach from the screen session and end the ssh connection, do your test, log back into the router, and use screen -RR to reconnect to the still-running screen session with mtr and have a look at the min and max columns.
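The steps above, condensed into a command sketch (to be run over ssh on the router; this is the sequence, not a script to paste verbatim):

```shell
# On the router, over ssh:
opkg update
opkg install mtr screen

screen -RR        # attach to (or create) a screen session
mtr 1.1.1.1       # leave this running; watch the Best/Avg/Wrst columns
# press Ctrl-A, then Ctrl-D to detach; then exit the ssh session

# ...reproduce the latency spike / run your tests...

# Log back into the router:
screen -RR        # reattach to the still-running mtr and read min/max
```

The point of screen here is just that mtr keeps accumulating statistics across the spike even while no ssh session is attached.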
I've disabled both radios (radio0 and radio1; radio2 has never been active) but the problem still persists. I'll set up SQM now and have installed what you've suggested. Maybe I messed something up, but after getting Destination host unreachable in cmd (is this packet loss or something else?) and several high-ms spikes later, doing screen -RR again I now only see the following:
(no route to host) ?????????????????????????????????????? Scale: .:820ms 1:914ms 2:1071ms 3:1291ms a:1575ms b:1921ms c:2330ms
Tell me if something is unclear or confusing in how I express myself; I'm not all that familiar with the terminology.
That indicates that 1.1.1.1 is not reachable, and hence mtr has nothing to display and we have no data to diagnose your issue...
That is odd, but if you use mtr -z -e -b google.com, what do you get then?
Mmmh, I would really like to see the full mtr output (it is fine to keep redacting your IP address as you did above, I just want to see the whole display). Also try to run mtr interactively (without the -c 100 and -w arguments) to see what is going on.
This is without SQM installed, but I'll move forward by installing it now. Also, this time running mtr I didn't get any complete freeze/halt (as when Destination host unreachable or Request timed out is shown while pinging), but I did notice random spikes of 700 ms, 1156 ms and so on. It's all very random (in my mind) when the spikes or complete freezes occur.
Thanks, the worst-case column seems to indicate that, from hop 3 at the latest, worst-case latencies are through the roof. The next step is to install and run mtr/WinMTR (https://github.com/White-Tiger/WinMTR) on a host computer in your internal network and repeat the test, but interactively, and try to see whether all hops show more or less bad RTTs at around the same times. (Then copy and paste from the mtr/WinMTR command terminal/window before exiting.)
I would guess that the bad latencies will already start at your router, but it is worth confirming, since in the off-chance that it is an issue with ISP gear you have little chance of fixing it from your side.
Also, it would be interesting to see the redacted output of dmesg and logread from your router shortly after one of these latency spikes (the last, say, 20 lines of each, carefully redacted, should be sufficient).
Reply from 216.58.207.238: bytes=32 time=10ms TTL=52
Reply from 81.170.xxx.xxx: Destination host unreachable.
Reply from 216.58.207.238: bytes=32 time=2152ms TTL=52
rendering my mtr test blank (no route to host), but here is the WinMTR output:
WinMTR statistics

| Host                            |   % | Sent | Recv | Best | Avrg | Wrst | Last |
|---------------------------------|-----|------|------|------|------|------|------|
| OpenWrt.lan                     |   0 |  882 |  882 |    0 |    0 |    4 |    0 |
| Request timed out.              | 100 |  177 |    0 |    0 |    0 |    0 |    0 |
| Request timed out.              | 100 |  177 |    0 |    0 |    0 |    0 |    0 |
| a258-gw.bahnhof.net             |   1 |  880 |  879 |    0 |    3 | 2144 |    1 |
| ume-ftp-dr1.svl-cr1.bahnhof.net |   1 |  879 |  878 |    4 |    7 | 2148 |    5 |
| svl-cr1.sto-cr1.bahnhof.net     |   1 |  880 |  879 |   10 |   12 | 2153 |   12 |
| sto-cr1.sto-ixa-er1.bahnhof.net |   1 |  878 |  877 |   10 |   10 |   21 |   10 |
| 72.14.211.124                   |   1 |  879 |  878 |   10 |   12 | 2154 |   10 |
| Request timed out.              | 100 |  177 |    0 |    0 |    0 |    0 |    0 |
| 74.125.252.62                   |   1 |  879 |  878 |   12 |   15 | 2156 |   13 |
| 209.85.246.27                   |   1 |  878 |  877 |   11 |   14 | 2156 |   11 |
| arn11s04-in-f14.1e100.net       |   1 |  878 |  877 |   10 |   13 | 2154 |   10 |
If you want I can re-run the mtr+WinMTR again?
EDIT:
Here it is with a small spike of 300 ms (it's difficult to capture multiple spikes without one including Destination host unreachable, rendering the mtr test blank).
it suggests that your upstream router doesn't know how to route... like the ISP has flapping routes (ISPs use dynamic routing protocols; when routers become available, then unavailable, then available again, etc., it's called "flapping")
I wonder if your ISP has a loose cable somewhere...
You could try adding -G 10 to the mtr invocation (from mtr --help: "-G, --gracetime SECONDS number of seconds to wait for responses"), as it seems that mtr is thrown off balance by the massive RTT spikes...
But the point is that the spikes do not affect traffic to your router; they seem to start at a258-gw.bahnhof.net, so it seems the issue is restricted to the wan side. Progress.
Well, or the OP's router; that is why the logread and dmesg output would be interesting (as well as the redacted output of ifstatus wan).
This is directly after a 1000 ms spike (I haven't been able to capture a full freeze yet). Should I delete everything besides the last 20 lines for both dmesg and logread?
> Sun Feb 2 14:24:51 2020 daemon.notice procd: /etc/init.d/network: 'radio0' is disabled
> Sun Feb 2 14:24:51 2020 daemon.notice procd: /etc/init.d/network: 'radio1' is disabled
> Sun Feb 2 14:24:51 2020 daemon.notice procd: /etc/init.d/network: 'radio2' is disabled
> Sun Feb 2 14:24:51 2020 daemon.notice procd: /etc/init.d/network: 'radio0' is disabled
> Sun Feb 2 14:24:51 2020 daemon.notice procd: /etc/init.d/network: 'radio1' is disabled
> Sun Feb 2 14:24:51 2020 daemon.notice procd: /etc/init.d/network: 'radio2' is disabled
> Sun Feb 2 14:24:51 2020 user.notice ucitrack: Setting up /etc/config/network reload dependency on /etc/config/dhcp
> Sun Feb 2 14:24:51 2020 user.notice ucitrack: Setting up /etc/config/network reload dependency on /etc/config/radvd
> Sun Feb 2 14:24:51 2020 user.notice ucitrack: Setting up /etc/config/wireless reload dependency on /etc/config/network
> Sun Feb 2 14:24:51 2020 user.notice ucitrack: Setting up /etc/config/firewall reload dependency on /etc/config/luci-splash
> Sun Feb 2 14:24:51 2020 user.notice ucitrack: Setting up /etc/config/firewall reload dependency on /etc/config/qos
> Sun Feb 2 14:24:51 2020 user.notice ucitrack: Setting up /etc/config/firewall reload dependency on /etc/config/miniupnpd
> Sun Feb 2 14:24:51 2020 user.notice ucitrack: Setting up /etc/config/dhcp reload dependency on /etc/config/odhcpd
> Sun Feb 2 14:24:51 2020 user.notice ucitrack: Setting up non-init /etc/config/fstab reload handler: /sbin/block mount
> Sun Feb 2 14:24:51 2020 user.notice ucitrack: Setting up /etc/config/system reload trigger for non-procd /etc/init.d/led
> Sun Feb 2 14:24:52 2020 user.notice ucitrack: Setting up /etc/config/system reload dependency on /etc/config/luci_statistics
> Sun Feb 2 14:24:52 2020 user.notice ucitrack: Setting up /etc/config/system reload dependency on /etc/config/dhcp
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.517331] mvneta f1034000.ethernet eth0: configuring for fixed/sgmii link mode
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.524862] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.530779] mvneta f1034000.ethernet eth0: Link is Up - 1Gbps/Full - flow control off
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.538656] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.547346] br-lan: port 1(eth0.1) entered blocking state
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.552799] br-lan: port 1(eth0.1) entered disabled state
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.558341] device eth0.1 entered promiscuous mode
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.563169] device eth0 entered promiscuous mode
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.569828] br-lan: port 1(eth0.1) entered blocking state
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.575270] br-lan: port 1(eth0.1) entered forwarding state
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.580937] IPv6: ADDRCONF(NETDEV_UP): br-lan: link is not ready
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'lan' is enabled
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'lan' is setting up now
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'lan' is now up
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'loopback' is enabled
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'loopback' is setting up now
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'loopback' is now up
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.593585] mvneta f1070000.ethernet eth1: configuring for fixed/rgmii-id link mode
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.601733] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.607747] mvneta f1070000.ethernet eth1: Link is Up - 1Gbps/Full - flow control off
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'wan' is enabled
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'wan6' is enabled
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: bridge 'br-lan' link is up
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'lan' has link connectivity
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Network device 'eth0' link is up
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: VLAN 'eth0.1' link is up
> Sun Feb 2 14:24:52 2020 kern.info kernel: [ 16.620115] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Network device 'lo' link is up
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'loopback' has link connectivity
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Network device 'eth1' link is up
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: VLAN 'eth1.2' link is up
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'wan' has link connectivity
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'wan' is setting up now
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'wan6' has link connectivity
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'wan6' is setting up now
> Sun Feb 2 14:24:52 2020 daemon.notice procd: /etc/rc.d/S96led: setting up led WAN
> Sun Feb 2 14:24:52 2020 daemon.notice procd: /etc/rc.d/S96led: setting up led USB 1
> Sun Feb 2 14:24:52 2020 daemon.notice procd: /etc/rc.d/S96led: setting up led USB 2
> Sun Feb 2 14:24:52 2020 user.notice firewall: Reloading firewall due to ifup of lan (br-lan)
> Sun Feb 2 14:24:52 2020 daemon.notice procd: /etc/rc.d/S96led: setting up led USB 2 SS
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: wan (2098): udhcpc: started, v1.30.1
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: wan (2098): udhcpc: sending discover
> Sun Feb 2 14:24:52 2020 daemon.err odhcp6c[2097]: Failed to send RS (Address not available)
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: wan (2098): udhcpc: sending select for 81.170.243.177
> Sun Feb 2 14:24:52 2020 daemon.notice procd: /etc/rc.d/S99bootcount: Boot count sucessfully reset to zero.
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: wan (2098): udhcpc: lease of 81.170.xxx.xxx obtained, lease time 86400
> Sun Feb 2 14:24:52 2020 daemon.info procd: - init complete -
> Sun Feb 2 14:24:52 2020 daemon.notice netifd: Interface 'wan' is now up
> Sun Feb 2 14:24:52 2020 daemon.info dnsmasq[1702]: reading /tmp/resolv.conf.auto
> Sun Feb 2 14:24:52 2020 daemon.info dnsmasq[1702]: using local addresses only for domain test
> Sun Feb 2 14:24:52 2020 daemon.info dnsmasq[1702]: using local addresses only for domain onion
> Sun Feb 2 14:24:52 2020 daemon.info dnsmasq[1702]: using local addresses only for domain localhost
> Sun Feb 2 14:24:52 2020 daemon.info dnsmasq[1702]: using local addresses only for domain local
> Sun Feb 2 14:24:52 2020 daemon.info dnsmasq[1702]: using local addresses only for domain invalid
> Sun Feb 2 14:24:52 2020 daemon.info dnsmasq[1702]: using local addresses only for domain bind
> Sun Feb 2 14:24:52 2020 daemon.info dnsmasq[1702]: using local addresses only for domain lan
> Sun Feb 2 14:24:52 2020 daemon.info dnsmasq[1702]: using nameserver 213.80.98.2#53
> Sun Feb 2 14:24:52 2020 daemon.info dnsmasq[1702]: using nameserver 213.80.101.3#53
> Sun Feb 2 14:24:52 2020 user.notice firewall: Reloading firewall due to ifup of wan (eth1.2)
> Sun Feb 2 14:24:52 2020 daemon.err odhcp6c[2097]: Failed to send DHCPV6 message to ff02::1:2 (Address not available)
> Sun Feb 2 14:24:53 2020 kern.info kernel: [ 17.609963] IPv6: ADDRCONF(NETDEV_CHANGE): br-lan: link becomes ready
> Sun Feb 2 14:24:53 2020 daemon.err odhcpd[1822]: Failed to send to ff02::1%lan@br-lan (Address not available)
> Sun Feb 2 14:24:53 2020 daemon.err odhcp6c[2097]: Failed to send DHCPV6 message to ff02::1:2 (Address not available)
> Sun Feb 2 14:24:56 2020 daemon.err procd: unable to find /sbin/ujail: No such file or directory (-1)
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[1702]: exiting on receipt of SIGTERM
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: started, version 2.80 cachesize 150
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: DNS service limited to local subnets
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP no-DHCPv6 no-Lua TFTP no-conntrack no-ipset no-auth no-DNSSEC no-ID loop-detect inotify dumpfile
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq-dhcp[2643]: DHCP, IP range 192.168.1.100 -- 192.168.1.249, lease time 12h
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain test
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain onion
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain localhost
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain local
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain invalid
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain bind
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain lan
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: reading /tmp/resolv.conf.auto
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain test
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain onion
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain localhost
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain local
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain invalid
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain bind
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using local addresses only for domain lan
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using nameserver 213.80.98.2#53
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: using nameserver 213.80.101.3#53
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: read /etc/hosts - 4 addresses
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: read /tmp/hosts/dhcp.cfg01411c - 2 addresses
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: read /tmp/hosts/odhcpd - 1 addresses
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq-dhcp[2643]: read /etc/ethers - 0 addresses
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: read /etc/hosts - 4 addresses
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: read /tmp/hosts/dhcp.cfg01411c - 2 addresses
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq[2643]: read /tmp/hosts/odhcpd - 1 addresses
> Sun Feb 2 14:24:56 2020 daemon.info dnsmasq-dhcp[2643]: read /etc/ethers - 0 addresses
> Sun Feb 2 14:26:51 2020 daemon.info dnsmasq-dhcp[2643]: DHCPREQUEST(br-lan) 192.168.x.xxx 00:11:xx:xx:xx:xx
> Sun Feb 2 14:26:51 2020 daemon.info dnsmasq-dhcp[2643]: DHCPACK(br-lan) 192.168.x.xxx 00:11:xx:xx:xx:xx diskstation
> Sun Feb 2 14:27:03 2020 daemon.err uhttpd[1886]: luci: accepted login on / for root from 192.168.x.xxx
> Sun Feb 2 14:27:05 2020 daemon.info dnsmasq-dhcp[2643]: DHCPREQUEST(br-lan) 192.168.x.xxx 00:11:xx:xx:xx:xx
> Sun Feb 2 14:27:05 2020 daemon.info dnsmasq-dhcp[2643]: DHCPACK(br-lan) 192.168.x.xxx 00:11:xx:xx:xx:xx diskstation
> Sun Feb 2 14:27:15 2020 authpriv.info dropbear[2779]: Child connection from 192.168.x:xxx:xxxxx
> Sun Feb 2 14:27:19 2020 authpriv.notice dropbear[2779]: Password auth succeeded for 'root' from 192.168.x.xxx:xxxxx
> Sun Feb 2 14:28:04 2020 daemon.notice netifd: Interface 'wan6' is now down
> Sun Feb 2 14:28:04 2020 daemon.notice netifd: Interface 'wan6' is setting up now
> Sun Feb 2 15:23:26 2020 daemon.info dnsmasq-dhcp[2643]: DHCPREQUEST(br-lan) 192.168.x.xxx bc:5f:xx:xx:xx:xx
> Sun Feb 2 15:23:26 2020 daemon.info dnsmasq-dhcp[2643]: DHCPACK(br-lan) 192.168.x.xxx bc:5f:xx:xx:xx:xx Win10