Setting the tcp_mem parameter

Hi!
Can anyone who knows this topic help me out? If I run the command
cat /proc/sys/net/ipv4/tcp_mem
via PuTTY, I get the following values:
642 859 1284
Does it make sense to increase these values on an Archer C20 v4 with 64 MB of RAM?

I saw these values on a site:
96552 128739 193104
My question is this: will I have enough RAM if I set these values? Or which values would be better?

Is this by any chance another torrent question?

No, I'm learning the network stack.
Everything seems clear so far, except for this parameter.
I'm fine-tuning network parameters; I want to configure things better for myself.

Not sure how much this matters for a routing workload; I would expect this to be more relevant for machines acting as endpoints of TCP connections. So what workload do you run on your router?
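
One thing worth noting either way: tcp_mem is measured in pages (usually 4 KiB each), and the kernel auto-scales the defaults to the machine's RAM at boot, which is why your values look small. As a back-of-envelope check of the site's numbers (a sketch, assuming the usual 4096-byte page size):

echo $((193104 * 4096 / 1024 / 1024))   # the site's "high" value: ~754 MB
echo $((1284 * 4096 / 1024 / 1024))     # your current "high" value: ~5 MB

So the site's values would allow TCP to claim far more memory than the whole device has, while the autotuned defaults already fit your 64 MB.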

It's hard to say what the load is, but I can tell you what I have already changed:

net.ipv4.conf.all.accept_redirects=0
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.all.secure_redirects=0
net.ipv4.tcp_max_syn_backlog=4096
net.ipv4.tcp_max_orphans=65536
net.ipv4.tcp_keepalive_time=60
net.ipv4.tcp_keepalive_intvl=15
net.ipv4.tcp_keepalive_probes=5
net.core.somaxconn=15000
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_no_metrics_save=1
net.ipv4.tcp_rfc1337=1
net.ipv4.ip_local_port_range="1024 65535"
net.netfilter.nf_conntrack_max=16384
net.ipv4.tcp_synack_retries=3
net.netfilter.nf_conntrack_generic_timeout=60
net.netfilter.nf_conntrack_tcp_timeout_close_wait=30
net.netfilter.nf_conntrack_tcp_timeout_established=600
net.netfilter.nf_conntrack_tcp_timeout_fin_wait=30
net.netfilter.nf_conntrack_tcp_timeout_syn_recv=30
net.netfilter.nf_conntrack_tcp_timeout_syn_sent=60
net.netfilter.nf_conntrack_tcp_timeout_time_wait=60
net.netfilter.nf_conntrack_udp_timeout_stream=60
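
(These live in /etc/sysctl.conf so they survive a reboot; a minimal sketch, assuming a stock OpenWrt image where /etc/init.d/sysctl applies that file at boot:)

sysctl -p /etc/sysctl.conf   # re-apply after editing, without rebooting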

What changed? Things started working better.
For example, in the evening, when the provider is under heavy load, my ping holds steady in the online game.

Out of curiosity, which of these changes do you credit for improving ICMP performance, given that the bulk of them affect TCP or UDP?
And how did you measure the improvements?


ICMP, if you notice, I disabled altogether; I also disabled timestamps for security. I have only just started learning the Linux networking stack.
I know how to configure the Windows network stack, and on that basis I decided to configure the router.

This is IMHO security theater. Timestamps reveal very little (they can be used in fingerprinting and in estimating a system's uptime), are actually required for protection against some TCP attacks, and also help TCP performance. But that is mostly relevant for hosts actually terminating TCP connections, not so much for a router, which does not look at TCP timestamps on the packets it routes.
(According to e.g. https://man7.org/linux/man-pages/man7/tcp.7.html, Linux uses a random offset for each connection, so the uptime no longer leaks in an easily remotely detectable fashion; the attack surface RFC 1323 timestamps create is even smaller now than it was when the security recommendations to disable them were originally written.)

But again, an end system and a router typically perform different tasks, and what helps an end system might do little on a router.

My personal approach to such tuning is to always change things one at a time and to keep only those deviations from the defaults that noticeably improve the parameter I am trying to optimize. (I only try combinations of individually ineffective parameters when theory, or a believable recommendation, suggests doing so.)
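
In practice that can be as simple as this loop (a sketch; the server address is a placeholder, and the last line of ping output is the min/avg/max summary):

ping -c 100 game.example.com | tail -n 1   # baseline RTT summary
sysctl -w net.ipv4.tcp_timestamps=0        # exactly one change under test
ping -c 100 game.example.com | tail -n 1   # measure again under similar load
sysctl -w net.ipv4.tcp_timestamps=1        # revert unless clearly better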

I tune by trial and error: I read about a parameter, study it, and look for weaknesses.
For example, timestamps take up an extra 12 bytes.
A packet that could otherwise pass through whole might, because of timestamps, get fragmented, which leads to a buffer overflow, and that means lag in a network game.

Yes, for a full-MTU packet that is 12/1500 ≈ 0.8%, but keep in mind that at that point you already have at least 20+20 = 40 bytes of overhead (for IPv4 and TCP, plus any L2 overhead)...

No, that is not how this works. TCP is a stream-oriented protocol, not a packet-oriented one, so if you use RFC 1323 timestamps, the effective payload per IP packet simply gets 12 bytes smaller at the sender. No fragmentation happens at the IP level.
Also, IP fragmentation is orthogonal to "buffer overflows". Side note: "buffer overflow" is a term typically reserved for a class of bug where the system writes to memory beyond the end of a writable buffer, thereby changing memory that should not have been changed. That bug can happen anywhere, including the networking subsystem, but it is a bug, not a consequence of RFC 1323 timestamps. Also, most network games do not actually use TCP but UDP (anything timing-critical is not well served by TCP, since TCP waits for retransmission of lost packets before delivering data to the application, which can lead to bad latency spikes).
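
To make the arithmetic concrete (IPv4, standard 1500-byte Ethernet MTU):

1500 - 20 (IPv4 header) - 20 (TCP header) = 1460 payload bytes per segment
1500 - 20 (IPv4 header) - 20 (TCP header) - 12 (timestamp option) = 1448 payload bytes

The packet is 1500 bytes on the wire either way; the sender just fits 12 fewer payload bytes into each segment.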


I play on a server where the ping is 180 ms; in the evening it rose to 300-400 ms, which made the whole gameplay very difficult, so I set myself the task of doing better, which is why I began to study all these network parameters.
I deliberately moved to a game server with a high ping so that it is easier for me to run tests.
Playing an online game and testing the network stack at the same time.
You could call it visual testing.
I listen to other people's opinions, but I don't take them on faith; I have to check for myself)

Unless you have tried it already, maybe give sqm-scripts/luci-app-sqm a try, with cake as the qdisc and layer_cake.qos as the QoS script.
See the SQM documentation for configuration instructions and background information.
In case of questions, just create a new thread here in the forum with either sqm or cake in the title...

On a router, better queue management and an FQ scheduler can do wonders for gaming traffic (unless on a very slow link).
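
For reference, a minimal /etc/config/sqm along those lines might look like this (a sketch: the interface name and rates are placeholders; set the rates a bit below your measured line speeds, in kbit/s):

config queue
        option enabled '1'
        option interface 'eth0'
        option download '18000'
        option upload '1800'
        option qdisc 'cake'
        option script 'layer_cake.qos'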

I played with SQM for a long time; in the end, I removed it.
I decided it was better to play with the network path instead.
There are a lot of network parameters, so there is plenty to do in one's free time))
I have not found anything in SQM that was useful for me; perhaps it helped someone else, but it didn't help me.

Well, if SQM did not help (and your router was not CPU-limited), I consider it somewhat unlikely that modifying sysctl values on the router is going to result in robust and reliable improvements in end-to-end networking performance, but that is my subjective position.

Sure, if you enjoy such experimentation, go for it. Not sure it will help much, but it should not hurt much either :wink:


I added these too:

net.ipv4.tcp_sack=0
net.ipv4.tcp_dsack=0
net.ipv4.tcp_fack=0
net.ipv4.tcp_retries2=6
net.ipv4.tcp_tw_recycle=1
net.ipv4.icmp_echo_ignore_all=1

It began to work well.
Some parameters are still at the experimental stage; they are not in this list.
The top three parameters disable SACK and its related extensions (DSACK and FACK).

Well, again, how many TCP connections do you initiate/terminate on your router? These settings will have zero effect on your endpoints' TCP connections.
The only off-chance I see is if your router is borderline overloaded, and every bit of processing avoided there translates directly into less queueing delay and/or jitter.
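
A rough way to check on the router itself (forwarded flows show up in conntrack, while TCP the router terminates itself shows up in netstat):

cat /proc/sys/net/netfilter/nf_conntrack_count   # flows being forwarded
netstat -tn | grep -c ESTABLISHED                # TCP sessions terminated on the router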
