I have a PC Engines apu4d4 which has 4 GB of RAM. When setting up OpenWrt I noticed LuCI shows an active connections meter with a max of 16384. The documentation here leads me to believe that this number is smaller than it should be.
So I looked at the OpenWrt source, and it appears that nf_conntrack_max is hard-coded to 16384 here.
The only reason I could find was an issue from 7 years ago where someone complained that the default was too small, and the associated commit where it was initially hard-coded.
Is there any modern reason why this is still set to 16384 or am I missing something here? If not, can the sysctl setting please be removed so that Linux can determine the best value?
I think the title of my post might be misleading people. When I said "limited to 16384" I really meant "fixed to 16384". My problem is that the default behaviour in the upstream Linux kernel is to calculate the value based on the amount of RAM in the system, but OpenWrt forces it to always be 16384, regardless of whether that is too little or too much for a given system. Lleachii showed that 16384 is appropriate for 64 MB of RAM. However, for 4 MB it would be too much and for 4 GB it would be too little. I am not requesting a hard-coded value for all users, because that is exactly what OpenWrt is doing now!
To answer your questions:
On my system with 4 GB, I should and do receive 65536 when using the Linux default (16384 buckets * 4 = 65536).
I am suggesting no value at all, so that it uses the upstream default, which means calculating it based on the RAM in the system.
That is the correct ticket. In the ticket, nf_conntrack_max defaulted to 3870 on a system that had about 16 MB of RAM. The creator of the ticket felt that was "a little bit small", and as a result OpenWrt set nf_conntrack_max to 16384 for everyone. However, that was 7 years ago, and OpenWrt now recommends routers with at least 128 MB of RAM!
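The documented upstream default can be sketched as follows. This is a simplification of what the kernel actually does (the real code works in pages and accounts for struct sizes, so values differ slightly, e.g. 3870 rather than 4096 on the ~16 MB system from the ticket), but it shows how the default scales with RAM:

```python
def default_conntrack_max(totalram_bytes):
    # Sketch of the documented kernel default (not the exact kernel code):
    # buckets = total RAM / 16384, clamped to [32, 16384],
    # bumped to 65536 buckets on systems with more than 4 GB of RAM.
    buckets = totalram_bytes // 16384
    if totalram_bytes > 4 * 1024**3:
        buckets = 65536
    else:
        buckets = max(32, min(buckets, 16384))
    # nf_conntrack_max then defaults to buckets * 4
    return buckets * 4

MiB, GiB = 1024**2, 1024**3
print(default_conntrack_max(64 * MiB))   # 16384 -- matches Lleachii's 64 MB case
print(default_conntrack_max(4 * GiB))    # 65536 -- the value my 4 GB apu4d4 should get
```

So the value OpenWrt pins for everyone is exactly what the upstream default would produce on a 64 MB device.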
Does it actually matter?
As far as I know the memory for the hash table is allocated when needed.
That means a high(er) value doesn't hurt low memory devices.
But maybe it will consume (almost?) all memory under some conditions.
But if that's the case, then there are different problems...
Why are there so many connections?
Is the device (hardware) maybe not suited for the workload?
Some sysadmins set those values (buckets/max connections) to an (extremely) high value and just don't bother with it anymore.
In the event that nf_conntrack_count exceeds nf_conntrack_max, packets will start getting dropped. I am not sure what happens when you actually run out of memory, but I don't think it is acceptable for a router to crash, lock up, or have other running programs stop working due to an out-of-memory situation.
Well, either connections start to drop or other bad things happen
As I said if such things happen further investigations are needed and
proper countermeasures must be applied.
Either upgrade the machine with more RAM (conntrack cannot be swapped) and increase the conntrack limit, or enforce connection limits on a per-device basis, or something else.
I agree, 16384 is too much for a device with 4 MB of RAM.
But in a normal home network, a limit of 16384 connections will probably never be exhausted.
Most devices nowadays come with "plenty" of RAM, so 16384 should be more than sufficient.
So yes, maybe it is better to let the kernel decide what is best.
But on most devices it shouldn't cause problems.
On my system with 512 MB, the defaults are:
nf_conntrack_buckets = 8192
nf_conntrack_expect_max = 128
nf_conntrack_max = 32768
Does anyone know how to properly calculate the memory usage?
I found some sites that explain how to do this, but they are quite old...
For example, how do you get the size of an entry in the conntrack table?
How do you query this value at run time?
Estimates vary from 192 to 352 bytes.
For example, I found this formula:
conntrack_max = (desired_ram_usage * 1024 * 1024) / (nf_conntrack_struct_size + (size_of_pointer * 2) / KERNEL_HASHSIZE_TO_CONNTRACK_MAX_RATIO)
size_of_pointer = 4 (I guess, also for ARM?)
KERNEL_HASHSIZE_TO_CONNTRACK_MAX_RATIO = 8 (?)
Assuming a max struct size of 352.
Around ~11.5 MB of memory usage for 32768 connections? (32768 * 353 bytes)
If I want to use 75% of my 512 MB of RAM for conntrack: conntrack_max = 512 * 0.75 * 1024 * 1024 / 353
≈ 1140660
And buckets would actually be 142582 and not 285165, because
KERNEL_HASHSIZE_TO_CONNTRACK_MAX_RATIO is 8 and not 4?
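The formula above can be sketched in a few lines. Note that all of the constants here (352-byte struct, 4-byte pointers, bucket ratio of 8) are the guesses from this post, not authoritative kernel values:

```python
def conntrack_max_for_budget(budget_bytes, struct_size=352, ptr_size=4, ratio=8):
    # Per-entry cost = conntrack struct plus the per-entry share of the
    # hash table (2 pointers per bucket, one bucket per `ratio` entries).
    # With the guessed constants: 352 + (4 * 2) // 8 = 353 bytes per entry.
    per_entry = struct_size + (ptr_size * 2) // ratio
    return budget_bytes // per_entry

# 75% of 512 MB as the conntrack budget:
budget = int(512 * 0.75) * 1024**2
cmax = conntrack_max_for_budget(budget)
print(cmax)        # 1140660 entries
print(cmax // 8)   # 142582 buckets at a ratio of 8
```

This reproduces the ~1140660 figure above, and dividing by the ratio of 8 gives the 142582 buckets rather than 285165.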
nf_conntrack_max - INTEGER
Size of connection tracking table. Default value is
nf_conntrack_buckets value * 4.
nf_conntrack_buckets - INTEGER
Size of hash table. If not specified as parameter during module
loading, the default size is calculated by dividing total memory
by 16384 to determine the number of buckets but the hash table will
never have fewer than 32 and limited to 16384 buckets. For systems
with more than 4GB of memory it will be 65536 buckets.
This sysctl is only writeable in the initial net namespace.
OpenWrt is explicitly setting the value rather than allowing the kernel to calculate it using the documented algorithm. Nothing I've read in the thread or the git material provides a reason for fixing this value for all users rather than allowing the kernel code to calculate it.
@lleachii, your reply seems to suggest that I'm missing something; my apologies if I'm being thick here, but I just don't see a 'why' in any of the material I've read.
See Post No. 2. I understand it may not satisfy you (nor may you agree with it); but it does clearly explain the issue a user presented with leaving it unset, and why it was resolved in the manner it was (i.e. setting the value to 16384). Perhaps you should bring this up with the developers.