Max active connections (nf_conntrack_max) artificially fixed to 16384?

I have a PC Engines apu4d4 which has 4 GB of RAM. When setting up OpenWrt I noticed LuCI shows an active connections meter with a max of 16384. The documentation here leads me to believe that this number is smaller than it should be.

So, I looked at the OpenWrt source and it appears that nf_conntrack_max is hard coded to 16384 here.

The only reason that I could find was an issue from 7 years ago, when someone complained that the default was too small, and the associated commit where it was initially hard-coded.

Is there any modern reason why this is still set to 16384 or am I missing something here? If not, can the sysctl setting please be removed so that Linux can determine the best value?
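
For reference, the current limit and the live connection count can be read via standard sysctl/procfs paths (this is how I checked on my box):

sysctl net.netfilter.nf_conntrack_max
cat /proc/sys/net/netfilter/nf_conntrack_count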

Here are the links to the issue and commit mentioned. I could not include them in the original post because "new users can only put 2 links in a post".

@Tredwell, welcome to the community!

If the doc says:

buckets * 4

256 * 4 == 1024

16384 > 1024

  • Please explain how you're led to believe that 16384 is too low from the linked documentation?
  • What value are you suggesting?
  • If your device only had 4 MB of RAM, 16384 is very high
  • 4 GB of RAM in a [consumer-grade] router is not common
  • The issue in the ticket you linked isn't quite what you make it out to be - the creator's issue was that the value was quite low at 3870 and the connection timeouts were 5 days!
  • Yes, the commit shows the setting of 16384

There are many sites that note a single conntrack entry takes about 316 bytes... simply divide 4 MB of RAM by that.

https://johnleach.co.uk/posts/2009/06/17/netfilter-conntrack-memory-usage/
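
To make that concrete (my arithmetic, taking the ~316 bytes per entry from the linked post):

4 MB = 4,194,304 bytes
4,194,304 / 316 ≈ 13,272 entries
16384 * 316 bytes ≈ 5.2 MB

So 16384 entries would not even fit in 4 MB of RAM.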

Wouldn't the prudent solution be for you to manually edit your own router config, instead of requesting a hard setting change for all users running OpenWrt?

OpenWrt appears to be tuned for a device with ~64 MB of RAM, based on the documentation you provided yourself.

If this can be configured in "sysctl.conf", people with special requirements and big machines can tune the limit to meet their needs, no?

1 Like

Hi everyone and thanks for the welcome,

I think the title of my post might be misleading people. When I said "limited to 16384" I really meant "fixed to 16384". My problem is that the default setting in the upstream Linux kernel is to calculate the value based on the RAM in a system, but OpenWrt forces it to always be 16384, regardless of whether that is too little or too much for a given system. @lleachii showed that 16384 is appropriate for 64 MB of RAM. However, for 4 MB it would be too much and for 4 GB it would be too little. I am not requesting a hard change for all users - that is what OpenWrt is doing right now!

To answer your questions:

On my system with 4 GB I should, and do, receive 65536 when using the Linux default (the 16384-bucket cap * 4 = 65536).

I am suggesting no value at all, so that the upstream default applies, which means calculating it from the RAM in the system.

All this means practically is deleting this line:

net.netfilter.nf_conntrack_max=16384

Exactly! That is why OpenWrt's current setting of 16384 is bad and should be changed to the Linux default.

Yes, but that does not mean 16384 should be used by everyone; it hurts anyone who is not using a system with 64 MB of RAM.

I was trying to find the rationale for why 16384 was chosen, not describe the full issue.

No, because then not everyone would benefit from it.

Yes, they can. So can people with little machines. But why make people have to change this setting when the Linux default configures it automatically?
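
For anyone who wants the upstream behaviour today without waiting for a change, a rough sketch (runtime only, not persistent across reboots; assumes OpenWrt leaves nf_conntrack_buckets at the kernel-calculated default, which the shipped sysctl file suggests it does):

# read the hash table size the kernel picked at boot
BUCKETS=$(sysctl -n net.netfilter.nf_conntrack_buckets)
# apply the upstream default of buckets * 4
sysctl -w net.netfilter.nf_conntrack_max=$((BUCKETS * 4))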

3 Likes

Indeed, and this also applies to other arbitrary limits. Case in point…

/etc/sysctl.d/11-nf-conntrack.conf:

net.netfilter.nf_conntrack_max=16384
net.netfilter.nf_conntrack_tcp_timeout_established=7440
net.netfilter.nf_conntrack_udp_timeout=60
net.netfilter.nf_conntrack_udp_timeout_stream=180

/etc/sysctl.d/10-default.conf:

net.ipv4.igmp_max_memberships=100
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_keepalive_time=120

Don't do it. The kernel knows better.

1 Like

You showed us a ticket where OpenWrt set it too low at 3870 with a 5-day timeout... perhaps you're referencing another issue?

How would people with RAM less than you benefit from an arbitrarily high setting?

:+1:

Fixed!

That is the correct ticket. In the ticket, nf_conntrack_max defaulted to 3870 on a system that had about 16 MB of RAM. The creator of the ticket felt that was "a little bit small" and as a result OpenWrt set nf_conntrack_max to 16384 for everyone. However, that was 7 years ago and OpenWrt now recommends routers with at least 128 MB of RAM!

Thanks!

2 Likes

Does it actually matter?
As far as I know the memory for the hash table is allocated when needed.
That means a high(er) value doesn't hurt low-memory devices.
But maybe it will consume (almost?) all memory under some conditions.
But if that's the case, then there are different problems...

  • Why are there so many connections?
  • Is the device (hardware) maybe not suited for the workload?

Some sysadmins set those values (buckets/max connections) to an (extremely) high value and just don't bother with it anymore.

1 Like

In the event that nf_conntrack_count reaches nf_conntrack_max, packets will start getting dropped. I am not sure what will happen in the event that you run out of memory, but I don't think it is acceptable for a router to crash, lock up, or have other running programs stop working due to an out-of-memory situation.
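
For what it's worth, you can watch how close a device actually gets to the limit using the standard procfs counters (watch is a busybox applet on OpenWrt, assuming it's enabled in your build):

watch -n1 'cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max'

And when the table does fill, the kernel rate-limits a "nf_conntrack: table full, dropping packet" warning into the log.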

Ummm...that's exactly what happens...

Indeed.

This means ~64 MB of RAM can be dedicated to conntrack.

Well, either connections start to drop or other bad things happen :laughing:
As I said, if such things happen, further investigation is needed and proper countermeasures must be applied.
Either upgrade the machine with more RAM (conntrack memory cannot be swapped) and increase the conntrack limit, or enforce connection limits on a per-device basis, or something else.
I agree, 16384 is too much for a device with 4 MB of RAM.
But in a normal home network, a limit of 16384 connections will probably never be exhausted.
Most devices nowadays come with "plenty" of RAM, so 16384 should be more than sufficient.
So yes, maybe it is better to let the kernel decide what is best.
But on most devices it shouldn't cause problems.

//edit
On my system with 512 MB, the defaults are:
nf_conntrack_buckets = 8192
nf_conntrack_expect_max = 128
nf_conntrack_max = 32768

According to:
https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt
Those are the values for a system with 128 MB of RAM.
And nf_conntrack_expect_max should be 32.

Does anyone know how to properly calculate the memory usage?
I found some sites that explain how to do this, but they are quite old...
For example, how do you get the size of an entry in the conntrack table?
How do you query this value at run time?
Reported sizes vary from 192 to 352 bytes.
For example, I found this formula:

conntrack_max = (desired_ram_usage * 1024 * 1024) / (nf_conntrack_struct_size + (size_of_pointer * 2) / KERNEL_HASHSIZE_TO_CONNTRACK_MAX_RATIO)
size_of_pointer = 4 (I guess, also for ARM?)
KERNEL_HASHSIZE_TO_CONNTRACK_MAX_RATIO = 8 (?)
Assuming a max struct size of 352.
Around ~11.25 MB of memory usage for 32768 connections?

If I want to use 75% of RAM for conntrack:
conntrack_max = 512 * 0.75 * 1024 * 1024 / 353
= ~ 1140660
And buckets is actually 142582 and not 285165 because
KERNEL_HASHSIZE_TO_CONNTRACK_MAX_RATIO is 8 and not 4?
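
One way to get the per-entry size at run time, assuming the conntrack slab cache shows up in /proc/slabinfo (needs root; the fourth column is the object size, which varies by kernel and arch):

grep nf_conntrack /proc/slabinfo | awk '{print $1, $4}'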

Was this ever resolved?
On GitHub it seems that the master branch still carries this value.
Digging through history, it seems the value was introduced in 2009.
I am very curious to know the motivations behind the current value.

@tony-adams, welcome to the community!

Resolved what?

Did you actually read the thread?

You actually explained on the Git page why the value has to be set.

I have read the thread and kernel documentation.

Kernel doc says:

nf_conntrack_max - INTEGER
Size of connection tracking table. Default value is
nf_conntrack_buckets value * 4.

and

nf_conntrack_buckets - INTEGER
Size of hash table. If not specified as parameter during module
loading, the default size is calculated by dividing total memory
by 16384 to determine the number of buckets but the hash table will
never have fewer than 32 and limited to 16384 buckets. For systems
with more than 4GB of memory it will be 65536 buckets.
This sysctl is only writeable in the initial net namespace.
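
Working that algorithm through for two RAM sizes (my arithmetic, 1 MB = 1,048,576 bytes):

64 MB: 67,108,864 / 16384 = 4096 buckets -> nf_conntrack_max = 4096 * 4 = 16384
4 GB: 4,294,967,296 / 16384 = 262144, capped at 16384 buckets -> nf_conntrack_max = 65536

16384 is exactly what the kernel would pick on a 64 MB machine, which matches the earlier observation that OpenWrt appears tuned for ~64 MB of RAM.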

OpenWRT is explicitly setting the value rather than allowing the kernel to calculate based on the documented algorithm. Nothing I've read in the thread or git material provides a reason for fixing this value for all users rather than allowing kernel code to calculate the value.

@lleachii, your reply seems to suggest that I'm missing something; my apologies if I'm being thick here, but I just don't see a 'why' in any of the material I've read.

No problem.

See Post No. 2. I understand it may not satisfy you (nor may you agree with it); but it does clearly explain the issue a user presented with leaving it unset; and why it was resolved in the manner it was (i.e. setting the value at 16384). Perhaps you should bring this up to the developers.

This value was fixed 7 years ago because the default auto-calculated value was too small for daily use in a low-memory router with, say, 4 MB of RAM.

But today the minimum memory requirement for OpenWrt is 32 MB. Fixing this value at 16384 is no longer a performance extension, but a performance limitation.

What should be considered now is to file a bug with the developers and ask to remove this fixed value, or at least increase it.

It's possible to edit the value here:

/etc/sysctl.d/11-nf-conntrack.conf

It would be nice if this were possible through LuCI, though.

As mentioned in the header of the file, changes will be lost after upgrade and it's better to use /etc/sysctl.conf

2 Likes

At least one person read the file :wink:

Another method is to create and use a local.conf in /etc/sysctl.d/.

This is preferred on other Linux distros, where system upgrades may revert sysctl.conf to the maintainer's version and lead to merge conflicts or require user intervention, as opposed to unattended upgrades.
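
A minimal sketch of that approach (filename per the suggestion above; sysctl.d fragments are normally applied in lexical order, so "local.conf" sorts after the numbered defaults and wins; 65536 is just the example value computed earlier for 4 GB):

# /etc/sysctl.d/local.conf
net.netfilter.nf_conntrack_max=65536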