Elasticsearch Optimization Checklist
Assumptions
- hardware assumptions
- index/query rate assumptions
- Elasticsearch runs as the elasticsearch user
Hardware Level
See [Elasticsearch Hardware Recommendation][9].
System Level
- adjust vm.swappiness [1][1]
# permanent change
$ echo "vm.swappiness = 1" >> /etc/sysctl.conf
# temporary change; lost after a reboot
$ sysctl -w vm.swappiness=1
# flush anything already swapped out
$ sudo swapoff -a
$ sudo swapon -a
A swappiness of 1 is better than 0, since on some kernel versions a swappiness of 0 can invoke the OOM-killer.
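A quick way to confirm the live value after applying either change:
$ sysctl vm.swappiness
vm.swappiness = 1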
- set max open file descriptors to 32k~64k
# max open file descriptors
$ cp /etc/security/limits.conf /etc/security/limits.conf.bak
$ cat /etc/security/limits.conf | grep -v "elasticsearch" > /tmp/system_limits.conf
$ echo "elasticsearch hard nofile 50000" >> /tmp/system_limits.conf
$ echo "elasticsearch soft nofile 50000" >> /tmp/system_limits.conf
$ mv /tmp/system_limits.conf /etc/security/limits.conf
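A quick sanity check, assuming a fresh session (limits.conf is applied at login, and whether su picks it up depends on the distribution's PAM configuration); the service-level check via GET /_nodes/process appears in the monitoring section below:
$ sudo su -s /bin/bash -c 'ulimit -Sn; ulimit -Hn' elasticsearch
50000
50000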
- configure the maximum map count
Set it permanently by modifying the vm.max_map_count setting in /etc/sysctl.conf.[2][2]
# virtual Memory
$ cp /etc/sysctl.conf /etc/sysctl.conf.bak
$ cat /etc/sysctl.conf | grep -v "vm.max_map_count" > /tmp/system_sysctl.conf
$ echo "vm.max_map_count=262144" >> /tmp/system_sysctl.conf
$ mv /tmp/system_sysctl.conf /etc/sysctl.conf
Or change it temporarily: [[7]][7]
sysctl -w vm.max_map_count=262144
Check the result:
$ sysctl -a|grep vm.max_map_count
vm.max_map_count = 262144
Application Level
- JVM
See [Java Virtual Machine][10].
Check the result:
$ java -version
java version "1.7.0_45"
OpenJDK Runtime Environment (rhel-2.4.3.3.el6-x86_64 u45-b15)
OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)
- ES_HEAP_SIZE=Xg
Ensure that the min (Xms) and max (Xmx) sizes are the same to prevent the heap from resizing at runtime, a very costly process. Give half your memory to Lucene, and don't cross 32 GB!
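For example, on a hypothetical machine with 64 GB of RAM (ES_HEAP_SIZE sets both Xms and Xmx; where the startup script reads the variable varies by distribution):
# hypothetical 64 GB box: roughly half to the heap, staying under 32 GB
export ES_HEAP_SIZE=31g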
- enable mlockall (elasticsearch.yml)
bootstrap.mlockall: true
- discovery (elasticsearch.yml)
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: master_node_list
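For example, with three hypothetical master-eligible nodes:
# hypothetical hostnames
discovery.zen.ping.unicast.hosts: ["es-master-1", "es-master-2", "es-master-3"]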
- recovery strategy (elasticsearch.yml)
# if you have 10 nodes
gateway.recover_after_nodes: 8
gateway.expected_nodes: 10
gateway.recover_after_time: 10m
ES includes several recovery properties which improve both ElasticSearch cluster recovery and restart times. We have shown some sample values below. The value that will work best for you depends on the hardware you have in use, and the best advice we can give is to test, test, and test again.
cluster.routing.allocation.node_concurrent_recoveries: 4
This property sets how many shards per node are allowed to recover at any moment in time. Recovering shards is a very IO-intensive operation, so you should set this value with real caution.
cluster.routing.allocation.node_initial_primaries_recoveries: 18
This controls the number of primary shards initialized concurrently on a single node. The number of parallel streams used to transfer data when recovering a shard from a peer node is controlled by indices.recovery.concurrent_streams. The value below is set up for an Amazon instance; if you have your own hardware you might be able to set this value much higher. The property max_bytes_per_sec (as its name suggests) determines how many bytes to transfer per second; this value again needs to be configured according to your hardware.
indices.recovery.concurrent_streams: 4
indices.recovery.max_bytes_per_sec: 40mb
All of the properties described above get used only when the cluster is restarted.[[5]][5]
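To watch recovery progress after a restart, the cat API is useful:
# lists per-shard recovery stage, source node, and target node
$ curl 'localhost:9200/_cat/recovery?v'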
- Threadpool Properties Prevent Data Loss[[5]][5]
An ElasticSearch node has several thread pools to improve how threads are managed within the node. At Loggly, we use bulk requests extensively, and we have found that setting the right value for the bulk thread pool via the threadpool.bulk.queue_size property is crucial to avoid data loss or _bulk retries.
threadpool.bulk.queue_size: 3000
This property value is for bulk requests. It tells ES the number of requests that can be queued for execution on a node when no thread is available to execute a bulk request. This value should be set according to your bulk request load. If the number of bulk requests goes higher than the queue size, you will get a RemoteTransportException as shown below.
Note that in ES the bulk requests queue contains one item per shard, so this number needs to be higher than the number of concurrent bulk requests you want to send if those request contain data for many shards. For example, a single bulk request may contain data for 10 shards, so even if you only send one bulk request, you must have a queue size of at least 10. Setting this value “too high” will chew up heap in your JVM, but does let you hand off queuing to ES, which simplifies your clients.
You either need to keep the property value higher than your accepted load or gracefully handle RemoteTransportException in your client code. If you don’t handle the exception, you will end up losing data. We simulated the exception shown below by sending more than 10 bulk requests with a queue size of 10.
RemoteTransportException[[<Bantam>][inet[/192.168.76.1:9300]][bulk/shard]]; nested: EsRejectedExecutionException[rejected execution (queue capacity 10) on org.elasticsearch.action.support.replication.Trans...@13fe9be];
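A minimal sketch of the "handle it gracefully" option, in bash with curl. It assumes a hypothetical payload file bulk_payload.json; a production client should parse the per-item errors field of the _bulk response rather than grepping the body:
# retry a rejected _bulk request with linear backoff
for attempt in 1 2 3 4 5; do
  response=$(curl -s -XPOST 'http://localhost:9200/_bulk' --data-binary @bulk_payload.json)
  # crude rejection check; real clients should inspect each item's status
  echo "$response" | grep -q 'EsRejectedExecutionException' || break
  sleep $((attempt * 2))  # wait longer before each retry
done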
- Watch Out for delete_all_indices! [[5]][5]
It’s really important to know that the curl API in ES does not have very good authentication built into it. A simple curl API can cause all the indices to delete themselves and lose all data. This is just one example of a command that could cause a mistaken deletion:
curl -XDELETE 'http://localhost:9200/*/'
To avoid this type of grief, you can set the following property:
action.disable_delete_all_indices: true
This ensures that when the above command is issued, the indices are not deleted and the request instead returns an error.
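With the setting in place, a quick way to verify the guard (the exact error message varies by version):
$ curl -XDELETE 'http://localhost:9200/_all'
# expected: an error response, not a deleted cluster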
- cluster settings tuning
PUT /_cluster/settings
{
  "persistent" : {
    "indices.store.throttle.max_bytes_per_sec": "20mb",
    "indices.breaker.fielddata.limit": "60%",
    "indices.breaker.request.limit": "40%",
    "indices.breaker.total.limit": "70%"
  }
}
The values above are the defaults. If the logs frequently show [your_index_name]... now throttling indexing: numMergesInFlight=6, maxNumMerges=5 while disk IO is not high, indices.store.throttle.max_bytes_per_sec can be raised. If the logs frequently show java.lang.OutOfMemoryError, you can lower indices.breaker.fielddata.limit, indices.breaker.request.limit, and indices.breaker.total.limit. If the fielddata a request needs would push the JVM heap usage past the limit, the request is aborted (Q: what does the client get back when a request is aborted?), as the following log shows:
[2015-05-27 20:16:44,767][WARN ][indices.breaker ] [10.19.0.84] [FIELDDATA] New used memory 4848575200 [4.5gb] from field [http_request.raw] would be larger than configured breaker: 4831838208 [4.5gb], breaking
[2015-05-27 20:16:44,911][WARN ][indices.breaker ] [10.19.0.84] [FIELDDATA] New used memory 4833426184 [4.5gb] from field [host.raw] would be larger than configured breaker: 4831838208 [4.5gb], breaking
[2015-05-27 20:16:44,914][WARN ][indices.breaker ] [10.19.0.84] [FIELDDATA] New used memory 4833425505 [4.5gb] from field [domain.raw] would be larger than configured breaker: 4831838208 [4.5gb], breaking
TIP: In Fielddata Size, we spoke about adding a limit to the size of fielddata, to ensure that old unused fielddata can be evicted. The relationship between indices.fielddata.cache.size and indices.breaker.fielddata.limit is an important one. If the circuit-breaker limit is lower than the cache size, no data will ever be evicted. In order for it to work properly, the circuit breaker limit must be higher than the cache size.
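For example, in elasticsearch.yml (the 40% figure is an illustrative choice, not a value from the source; it just needs to stay below the 60% breaker limit):
# cap the fielddata cache below indices.breaker.fielddata.limit so eviction can happen
indices.fielddata.cache.size: 40%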
- disk based allocation strategy[[6]][6]
PUT /_cluster/settings
{
  "transient" : {
    "cluster.routing.allocation.disk.threshold_enabled" : true,
    "cluster.routing.allocation.disk.watermark.low" : "85%",
    "cluster.routing.allocation.disk.watermark.high" : "90%"
  }
}
cluster.routing.allocation.disk.watermark.low
controls the low watermark for disk usage. It defaults to 85%, meaning ES will not allocate new shards to nodes once they have more than 85% disk used. It can also be set to an absolute byte value (like 500mb) to prevent ES from allocating shards if less than the configured amount of space is available.
cluster.routing.allocation.disk.watermark.high
controls the high watermark. It defaults to 90%, meaning ES will attempt to relocate shards to another node if the node disk usage rises above 90%. It can also be set to an absolute byte value (similar to the low watermark) to relocate shards once less than the configured amount of space is available on the node.
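To see disk usage per node as the allocator sees it:
# shows shards per node plus disk.used, disk.avail, and disk.percent
$ curl 'localhost:9200/_cat/allocation?v'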
- index template tuning
Make good use of the order relationship between templates.
Make all fields not_analyzed by default.
By default, use [doc_values][4]-format fielddata for numeric, date, string (not_analyzed), and geo_point fields (see Appendix A for a base template).
Doc values can be enabled for numeric, date, Boolean, binary, and geo-point fields, and for not_analyzed string fields. They do not currently work with analyzed string fields. Doc values are enabled per field in the field mapping, which means that you can combine in-memory fielddata with doc values.
- install a monitoring tool
Elastic Marvel
To verify the file descriptor and mlockall settings above:
GET /_nodes/process should show
"max_file_descriptors": 64000,
"mlockall": true
Appendix A: Base Template
PUT _template/base
{
"order": 0,
"template": "*",
"settings": {
"index.refresh_interval": "120s",
"index.number_of_replicas": "1",
"index.number_of_shards": "10",
"index.routing.allocation.total_shards_per_node": "2",
"index.search.slowlog.threshold.query.warn": "10s",
"index.search.slowlog.threshold.query.info": "5s",
"index.search.slowlog.threshold.fetch.warn": "1s",
"index.search.slowlog.threshold.fetch.info": "800ms",
"index.indexing.slowlog.threshold.index.warn": "10s",
"index.indexing.slowlog.threshold.index.info": "5s"
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"integer field": {
"mapping": {
"doc_values": true,
"type": "integer"
},
"match": "*",
"match_mapping_type": "integer"
}
},
{
"date field": {
"mapping": {
"doc_values": true,
"type": "date"
},
"match": "*",
"match_mapping_type": "date"
}
},
{
"long field": {
"mapping": {
"doc_values": true,
"type": "long"
},
"match": "*",
"match_mapping_type": "long"
}
},
{
"float field": {
"mapping": {
"doc_values": true,
"type": "float"
},
"match": "*",
"match_mapping_type": "float"
}
},
{
"double field": {
"mapping": {
"doc_values": true,
"type": "double"
},
"match": "*",
"match_mapping_type": "double"
}
},
{
"byte field": {
"mapping": {
"doc_values": true,
"type": "byte"
},
"match": "*",
"match_mapping_type": "byte"
}
},
{
"short field": {
"mapping": {
"doc_values": true,
"type": "short"
},
"match": "*",
"match_mapping_type": "short"
}
},
{
"binary field": {
"mapping": {
"doc_values": true,
"type": "binary"
},
"match": "*",
"match_mapping_type": "binary"
}
},
{
"geo_point field": {
"mapping": {
"doc_values": true,
"type": "geo_point"
},
"match": "*",
"match_mapping_type": "geo_point"
}
},
{
"string fields": {
"mapping": {
"index": "not_analyzed",
"omit_norms": true,
"doc_values": true,
"type": "string"
},
"match": "*",
"match_mapping_type": "string"
}
}
],
"_all": {
"enabled": false
}
}
}
}
[1]: http://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#_swapping_is_the_death_of_performance "swapping is the death of performance"
[2]: http://www.elastic.co/guide/en/elasticsearch/guide/current/_file_descriptors_and_mmap.html "file descriptors and mmap"
[3]: http://www.elastic.co/guide/en/elasticsearch/guide/current/_important_configuration_changes.html#_recovery_settings "recovery settings"
[4]: http://www.elastic.co/guide/en/elasticsearch/guide/current/doc-values.html#_enabling_doc_values "Enabling Doc Values"
[5]: https://www.loggly.com/blog/nine-tips-configuring-elasticsearch-for-high-performance/ "9 Tips on ElasticSearch Configuration for High Performance"
[6]: http://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-allocation.html#disk "Disk-based Shard Allocation"
[7]: http://*.com/questions/11683850/how-much-memory-could-vm-use-in-linux "how much memory could vm use in linux"
[8]: http://www.elastic.co/guide/en/elasticsearch/guide/current/deploy.html "Production Deployment"
[9]: http://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html "Elasticsearch Hardware"
[10]: http://www.elastic.co/guide/en/elasticsearch/guide/current/_java_virtual_machine.html "Java Virtual Machine"
[11]: https://www.elastic.co/blog/performance-considerations-elasticsearch-indexing "Performance Considerations for Elasticsearch Indexing"