I have a router running LEDE (17.01) with kernel 4.4.61. The router is connected directly to a laptop with a 1 Gbit Ethernet cable. To get the maximum throughput, I tune the TCP parameters on both sides:
    # allow large socket buffers (up to 32 MiB)
    sysctl -w net.core.rmem_max=33554432
    sysctl -w net.core.wmem_max=33554432
    sysctl -w net.ipv4.tcp_rmem='4096 87380 33554432'
    sysctl -w net.ipv4.tcp_wmem='4096 16384 33554432'
    # enlarge the interface transmit queue
    ifconfig eth0 txqueuelen 100000
I run iperf in server mode on the router and start iperf in client mode on the laptop. I can reach around 950 Mbps, and at the same time the iperf output shows some retransmissions, which indicates packet drops. However, since the RTT is very small this does not matter much, because TCP can recover quickly.
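For reference, the two sides look roughly like this (a sketch assuming iperf3, whose client output includes the retransmission column I refer to; 192.168.1.1 is a placeholder for the router's address):

    # on the router (LEDE): run the server side
    iperf3 -s

    # on the laptop: run a 30-second TCP test towards the router,
    # reporting throughput and retransmissions every second
    iperf3 -c 192.168.1.1 -t 30 -i 1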
Now I add artificial delay with the netem tool on the outgoing traffic from my laptop towards the router, making sure that the netem queue is very large, around 100000 packets (see the sketch after this paragraph). I run an iperf session and see that I cannot reach more than 75 Mbps with a delay of 140 ms. The iperf output reports a very small window size and, from time to time, a lot of packet drops (high retransmission counts).
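Concretely, the netem setup on the laptop looks like this (a sketch; eth0 stands for the laptop's Ethernet interface):

    # on the laptop: delay all egress traffic towards the router by 140 ms,
    # with a 100000-packet queue so netem itself does not drop packets
    tc qdisc add dev eth0 root netem delay 140ms limit 100000

    # to remove the delay again
    tc qdisc del dev eth0 root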
So my question is: what could be the source of the packet drops in my setup, and how can I find the bottleneck? As mentioned above, I have already increased both the TCP buffer sizes and the Ethernet interface transmit queue lengths on both sides.
I looked at the statistics of the eth0 interface in /sys/class/net/eth0/statistics/ but could not find anything unusual. In addition, the netstat tool in LEDE does not provide much information about TCP sockets.
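This is the kind of check I ran on the router (just dumping the standard per-interface counters; rx_dropped, tx_dropped, rx_errors and tx_errors are the interesting ones):

    # print every standard interface counter under sysfs
    for f in /sys/class/net/eth0/statistics/*; do
        printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
    done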
Note: I repeated the previous experiment with a second laptop in place of the router and was able to reach 900 Mbps even with the high delay. So the bottleneck must be on the router side.