Is there an int overflow in nlbwmon?

I have configured nlbwmon (21.02-SNAPSHOT) with option refresh_interval '5m' (an infrequent refresh, to avoid clearing the conntrack counters too often) and for a while now I have been very suspicious of the d/u values that nlbwmon reports. I ran an experiment by downloading >100GB at 30..40MB/s, and the counter for that client only moved up by one or two GBs total. Then I switched to option refresh_interval '30s' and repeated the download. This time the download counter increased by 1..2GB every minute or so, which approximately matches the download rate.
Anyone else seeing wrong d/u values at high link speeds? Is a 5m refresh not supported any more?

@jow, do you think this is possible? I am looking at the code right now to see if I can find anything, but it would help if you have any suspicions about where this could be happening.

UPDATE: 30MB/s * 5 * 60s = 9,000,000,000 bytes, which is greater than max int32 == 2,147,483,647 (and also greater than max uint32 == 4,294,967,295).

UPDATE2: With 30s refreshes, 30MB/s * 30s = 900,000,000 bytes. It looks like even a 30s refresh might be too long for speeds above roughly 72MB/s (2,147,483,647 / 30 ≈ 71.6MB/s if the counter is a signed int32).

That is almost gigabit bandwidth...

Many of the tools are optimised for much lower speeds.

I am not even sure whether the underlying Linux netlink interface uses 32- or 64-bit counters, and I am especially unsure how the browser's JavaScript handles the large values for the LuCI display.

Sustained gigabit-level traffic is still a rarity (and your ISP would choke if all customers did that).

Be that as it may, this information does not help me figure out why this is happening. Maybe someone else with more knowledge can provide some pointers.

Well, you might do some debugging based on the advice in my previous message: the pondering of the netlink counter size vs. a possible impact from LuCI...

... looking at the actual nlbwmon statistics from the SSH console shows that the nlbwmon database contains 64-bit values just fine, and LuCI shows the same values. So this is likely not caused by LuCI.

root@router1:~# nlbw  -c show -g mac
              MAC      Conn.   > Downld. ( > Pkts. )      Upload (   Pkts. )
04:d4:c4:48:2c:9e    75.71 K    13.75 GB (   9.12 M)     9.43 GB (  10.82 M)
e0:c3:77:ae:0a:30    20.70 K   646.48 MB ( 169.56 K)    28.19 MB ( 150.40 K)
ac:57:75:56:c1:e0    13.68 K   168.59 MB ( 115.80 K)    25.02 MB (  57.01 K)

So, the problem is likely not in the nlbwmon database itself or in the LuCI display logic, but in the data collection part. Either netlink has 32-bit counters and you miss any change larger than 4GB (or 2GB, if signed) when you poll too infrequently, or nlbwmon handles the data wrongly.

Then, looking at the nlbwmon source, it seems to handle the counters as 64-bit, which looks ok. That turns my eyes toward the underlying netlink library as the possible culprit.

But I will leave the further debugging for somebody else with more knowledge.

Looks like shortening the refresh interval to 30s only fixes the problem temporarily: counting eventually just stops. This seems to coincide with the errors below, and restarting nlbwmon resumed the counting. The netlink API is very complicated at first sight.

daemon.err nlbwmon[924]: Netlink receive failure: Out of memory
daemon.err nlbwmon[924]: Unable to dump conntrack: No buffer space available

UPDATE: Trying option netlink_buffer_size '67108864' for now to see if that helps.

That error message gives some interesting forum hits.
https://forum.openwrt.org/search?q=%20Netlink%20receive%20failure%20

Increasing buffer size might help, yes. (But that is pretty much the only solution idea so far)

Yeah, I found those. Together with the 30s refresh interval it has been working fine so far.

Also make sure to leave offloading (hardware and software flow offloading; the same probably also goes for NSS-based offloading) disabled, since those bypass netfilter and you won't get real figures.

Offloading is definitely not enabled, and there is no NSS: it is a WRT3200ACM acting as a wired border router. The link is symmetrical but far from a gigabit, so I still need SQM running just in case, which rules out offloading anyway.

Increasing netlink_buffer_size to 64 and then 128MB did not lead to increased memory usage on the router. Is that normal?

Ended up making the following changes and it all seems to be working fine now:

# /etc/config/nlbwmon
        option refresh_interval '10s'
        option netlink_buffer_size '67108864'
# cat /etc/sysctl.d/99-user.conf 
net.core.wmem_max=67108864
net.core.rmem_max=67108864

The overflow or some other issue might still exist for very high speed links, but the changes above worked around whatever issues I experienced before.

Thx, but do you want to add a note that the kernel actually allocates double that amount? That could come in handy on routers with less than 512MB RAM.

https://markmail.org/message/pc6lvhteba3jnpft

The documentation states that TCP allocates double the amount, but the netlink socket does not use TCP.

This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.