# Kernel Tuning for MinIO Production Environments on Linux Servers

Here is a set of kernel tuning recommendations for MinIO servers. You can copy this [script](https://github.com/minio/minio/blob/master/docs/deployment/kernel-tuning/sysctl.sh) to your servers and use it there.

> NOTE: These are general recommendations for Linux servers, but apply them with great care. The settings are not mandatory and cannot fix hardware problems, so do not use them
> to chase performance in a way that masks problems with the hardware itself. In any case, benchmark the hardware first and apply these optimizations only once it delivers the expected results.

```
#!/bin/bash

cat > sysctl.conf <<EOF
# maximum number of open files/file descriptors
fs.file-max = 4194303

# use as little swap space as possible
vm.swappiness = 1

# prioritize application RAM against disk/swap cache
vm.vfs_cache_pressure = 50

# minimum free memory
vm.min_free_kbytes = 1000000

# follow mellanox best practices https://community.mellanox.com/s/article/linux-sysctl-tuning
# the following changes are recommended for improving IPv4 traffic performance by Mellanox

# disable the TCP timestamps option for better CPU utilization
net.ipv4.tcp_timestamps = 0

# enable the TCP selective acks option for better throughput
net.ipv4.tcp_sack = 1

# increase the maximum length of processor input queues
net.core.netdev_max_backlog = 250000

# increase the TCP maximum and default buffer sizes using setsockopt()
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
net.core.rmem_default = 4194304
net.core.wmem_default = 4194304
net.core.optmem_max = 4194304

# increase memory thresholds to prevent packet dropping:
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304

# enable low latency mode for TCP:
net.ipv4.tcp_low_latency = 1

# the following variable is used to tell the kernel how much of the socket buffer
# space should be used for TCP window size, and how much to save for an application
# buffer. A value of 1 means the socket buffer will be divided evenly between
# TCP window size and application.
net.ipv4.tcp_adv_win_scale = 1

# maximum number of incoming connections
net.core.somaxconn = 65535

# maximum queue length of half-open (SYN_RECV) connections waiting to complete the handshake
net.ipv4.tcp_max_syn_backlog = 4096

# time to wait (seconds) for FIN packet
net.ipv4.tcp_fin_timeout = 15

# disable icmp send redirects
net.ipv4.conf.all.send_redirects = 0

# disable icmp accept redirects
net.ipv4.conf.all.accept_redirects = 0

# drop packets with LSR or SSR
net.ipv4.conf.all.accept_source_route = 0

# MTU discovery, only enable when ICMP blackhole detected
net.ipv4.tcp_mtu_probing = 1

EOF

echo "Enabling system level tuning params"
sysctl --quiet --load sysctl.conf && rm -f sysctl.conf

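# Optional sanity check (illustrative addition, not part of the upstream MinIO script):
# spot-check a few of the values written above to confirm the settings took effect.
sysctl fs.file-max vm.swappiness net.core.somaxconn
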
# `Transparent Hugepage Support`*: This is a Linux kernel feature intended to improve
# performance by making more efficient use of the processor's memory-mapping hardware.
# But this may cause https://blogs.oracle.com/linux/performance-issues-with-transparent-huge-pages-thp
# for non-optimized applications. As most Linux distributions set it to `enabled=always` by default,
# we recommend changing this to `enabled=madvise`. This will allow applications optimized
# for transparent hugepages to obtain the performance benefits, while preventing the
# associated problems otherwise. Also, set `transparent_hugepage=madvise` on your kernel
# command line (e.g. in /etc/default/grub) to persistently set this value.

echo "Enabling THP madvise"
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```
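
The `echo madvise | sudo tee ...` line above only changes the THP mode for the currently running kernel. As a rough sketch of the persistent change mentioned in the comment, and assuming a GRUB-based distribution with a quoted `GRUB_CMDLINE_LINUX` entry in `/etc/default/grub` (the regeneration command varies by distribution), the kernel command-line parameter could be appended like this:

```
#!/bin/bash
# Sketch only: appends transparent_hugepage=madvise to the kernel command line.
# Does not check whether the parameter is already present; back up the file first.
sudo cp /etc/default/grub /etc/default/grub.bak
sudo sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 transparent_hugepage=madvise"/' /etc/default/grub

# Regenerate the GRUB configuration, then reboot for the change to take effect.
sudo update-grub                                  # Debian/Ubuntu
# sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # RHEL/CentOS (BIOS boot)
```

After a reboot, `cat /sys/kernel/mm/transparent_hugepage/enabled` should show the active mode in brackets, i.e. `[madvise]`.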