Optimizing Linux Network Performance: Kernel Parameters and Settings
Network performance can be critical when replicating data. The information transferred over the network contains the full content of the THL in addition to a small protocol overhead. Improving your network performance can therefore have a significant impact on the overall performance of the replication process.
The following network parameters should be configured within your /etc/sysctl.conf and can be safely applied to all hosts within your cluster deployments (a short sketch for loading and verifying them follows the parameter list):
# Increase size of file handles and inode cache
fs.file-max = 2097152
# Tells the kernel how many TCP sockets not attached to any user
# file handle to maintain. If this number is exceeded, orphaned
# connections are immediately reset and a warning is printed.
net.ipv4.tcp_max_orphans = 60000
# Do not cache metrics on closing connections
net.ipv4.tcp_no_metrics_save = 1
# Turn on window scaling which can enlarge the transfer window:
net.ipv4.tcp_window_scaling = 1
# Enable timestamps as defined in RFC1323:
net.ipv4.tcp_timestamps = 1
# Enable selective acknowledgments (SACK):
net.ipv4.tcp_sack = 1
# Maximum number of remembered connection requests that have not yet
# received an acknowledgment from the connecting client.
net.ipv4.tcp_max_syn_backlog = 10240
# Recommended default congestion control is htcp
net.ipv4.tcp_congestion_control = htcp
# Recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing = 1
# Number of times SYN+ACKs are retried for a passive TCP connection.
net.ipv4.tcp_synack_retries = 2
# Allowed local port range
net.ipv4.ip_local_port_range = 1024 65535
# Protect against TCP time-wait assassination hazards (RFC 1337)
net.ipv4.tcp_rfc1337 = 1
# Decrease the default value for the tcp_fin_timeout connection timeout
net.ipv4.tcp_fin_timeout = 15
# Increase the number of incoming connections
# somaxconn defines the upper limit on the listen() backlog (the
# accept queue) for each listening socket; the queue is persistent
# through the life of the listen socket.
net.core.somaxconn = 1024
# Increase the incoming packet backlog queue
# Sets the maximum number of packets queued on the INPUT
# side when the interface receives packets faster than the
# kernel can process them.
net.core.netdev_max_backlog = 65536
# Increase the maximum amount of option memory buffers
net.core.optmem_max = 25165824
# Increase the maximum total buffer-space allocatable
# This is measured in units of pages (4096 bytes)
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144
# Set the max OS send buffer size (wmem) and receive buffer
# size (rmem) to 24 MB for queues on all protocols. In other
# words, set the amount of memory that is allocated for each
# TCP socket when it is opened or created while transferring files
# Default Socket Receive Buffer
net.core.rmem_default = 25165824
# Maximum Socket Receive Buffer
net.core.rmem_max = 25165824
# Increase the read-buffer space allocatable (minimum size,
# initial size, and maximum size in bytes)
net.ipv4.tcp_rmem = 20480 12582912 25165824
net.ipv4.udp_rmem_min = 16384
# Default Socket Send Buffer
net.core.wmem_default = 25165824
# Maximum Socket Send Buffer
net.core.wmem_max = 25165824
# Increase the write-buffer-space allocatable
net.ipv4.tcp_wmem = 20480 12582912 25165824
net.ipv4.udp_wmem_min = 16384
# Increase the tcp-time-wait buckets pool size to prevent simple DoS attacks
net.ipv4.tcp_max_tw_buckets = 1440000
# tcp_tw_recycle was removed in Linux 4.12 and should remain disabled
# net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
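Once these values are in place in /etc/sysctl.conf (or a file under /etc/sysctl.d/), they can be loaded with sysctl -p or sysctl --system. As a minimal sketch of one way to verify that the running kernel actually reflects the new settings, the Python snippet below reads the values back from /proc/sys and reports mismatches; the parameter subset listed in it is only an illustrative assumption, so extend it to cover the full set above.

# verify_sysctl.py - minimal verification sketch; the EXPECTED subset is
# an illustrative assumption, extend it to match your /etc/sysctl.conf
from pathlib import Path

EXPECTED = {
    "net.ipv4.tcp_window_scaling": "1",
    "net.ipv4.tcp_sack": "1",
    "net.core.somaxconn": "1024",
    "net.core.rmem_max": "25165824",
    "net.core.wmem_max": "25165824",
}

def current_value(key):
    # sysctl keys map to /proc/sys paths by replacing '.' with '/'
    path = Path("/proc/sys") / key.replace(".", "/")
    # Collapse whitespace so multi-value entries (e.g. tcp_rmem) compare cleanly
    return " ".join(path.read_text().split())

for key, expected in EXPECTED.items():
    actual = current_value(key)
    status = "OK" if actual == expected else f"MISMATCH (expected {expected})"
    print(f"{key} = {actual}  [{status}]")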
Please note that changing kernel parameters can have significant effects on system behavior, so it is essential to thoroughly test and monitor the impact of these changes to confirm that they deliver the desired network performance improvements. It is also advisable to back up the original configuration before making any modifications. Additionally, some parameters might not be relevant or appropriate for all systems, so consider their suitability based on your specific use case and system requirements.
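As a minimal sketch of the backup step mentioned above, and assuming the keys of interest are the ones listed in this section, the snippet below snapshots the current runtime values into a timestamped file written in sysctl.conf syntax, so the previous settings can be re-applied with sysctl -p <file> if a rollback is needed; the file name and key list are illustrative assumptions.

# snapshot_sysctl.py - minimal backup sketch; the KEYS list and output
# file name are illustrative assumptions
import time
from pathlib import Path

KEYS = [
    "net.ipv4.tcp_fin_timeout",
    "net.ipv4.tcp_max_syn_backlog",
    "net.core.netdev_max_backlog",
    "net.ipv4.tcp_rmem",
    "net.ipv4.tcp_wmem",
]

backup = Path(f"sysctl-backup-{time.strftime('%Y%m%d-%H%M%S')}.conf")
with backup.open("w") as out:
    for key in KEYS:
        value = " ".join((Path("/proc/sys") / key.replace(".", "/")).read_text().split())
        # Written in sysctl.conf syntax so the old values can be restored
        # with: sysctl -p <backup file>
        out.write(f"{key} = {value}\n")
print(f"Saved current values to {backup}")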