
Fixing the Elasticsearch read-only error


Environment

Elasticsearch 6.5.1

Problem

Bulk writes into Elasticsearch (via elasticsearch-hadoop on Spark) fail with the following exception:

Caused by: org.elasticsearch.hadoop.EsHadoopException: Could not write all entries for bulk operation [1000/1000]. Error sample (first [5] error messages):
	org.elasticsearch.hadoop.rest.EsHadoopRemoteException: cluster_block_exception: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
	{"index":{}}
{"clients":"221.178.126.22","visitCount":2,"ip":"103.74.193.128","etime":1619329560836,"host":"www.whois.gd","stime":1619329560836,"clientCount":1}

	org.elasticsearch.hadoop.rest.EsHadoopRemoteException: cluster_block_exception: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
	{"index":{}}
{"clients":"123.145.32.108","visitCount":1,"ip":"47.91.24.26","etime":1619346287107,"host":"www.01zy.cn","stime":1619346287107,"clientCount":1}

	org.elasticsearch.hadoop.rest.EsHadoopRemoteException: cluster_block_exception: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
	{"index":{}}
{"clients":"125.82.28.37","visitCount":1,"ip":"200.0.81.81","etime":1619281692173,"host":"legislacao.anatel.gov.br","stime":1619281692173,"clientCount":1}

	org.elasticsearch.hadoop.rest.EsHadoopRemoteException: cluster_block_exception: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
	{"index":{}}
{"clients":"221.178.126.22","visitCount":2,"ip":"45.136.13.65","etime":1619299495362,"host":"www.veestyle.cn","stime":1619299249631,"clientCount":1}

	org.elasticsearch.hadoop.rest.EsHadoopRemoteException: cluster_block_exception: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
	{"index":{}}
{"clients":"14.111.48.11","visitCount":1,"ip":"156.250.150.139","etime":1619340198188,"host":"www.747218.com","stime":1619340198188,"clientCount":1}

Bailing out...
	at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.flush(BulkProcessor.java:519) ~[elasticsearch-spark-20_2.11-6.5.1.jar:6.5.1]
	at org.elasticsearch.hadoop.rest.bulk.BulkProcessor.add(BulkProcessor.java:127) ~[elasticsearch-spark-20_2.11-6.5.1.jar:6.5.1]
	at org.elasticsearch.hadoop.rest.RestRepository.doWriteToIndex(RestRepository.java:192) ~[elasticsearch-spark-20_2.11-6.5.1.jar:6.5.1]
	at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:172) ~[elasticsearch-spark-20_2.11-6.5.1.jar:6.5.1]
	at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:67) ~[elasticsearch-spark-20_2.11-6.5.1.jar:6.5.1]
	at org.elasticsearch.spark.rdd.EsSpark$$anonfun$doSaveToEs$1.apply(EsSpark.scala:107) ~[elasticsearch-spark-20_2.11-6.5.1.jar:6.5.1]
	at org.elasticsearch.spark.rdd.EsSpark$$anonfun$doSaveToEs$1.apply(EsSpark.scala:107) ~[elasticsearch-spark-20_2.11-6.5.1.jar:6.5.1]
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) ~[spark-core_2.11-2.3.1.jar:2.3.1]
	at org.apache.spark.scheduler.Task.run(Task.scala:109) ~[spark-core_2.11-2.3.1.jar:2.3.1]
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) ~[spark-core_2.11-2.3.1.jar:2.3.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_65]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_65]
	at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_65]
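
To confirm which indices carry the block, you can query the index settings and look for "read_only_allow_delete": "true" (${ip} below is a placeholder for any node address, as in the commands in the next section):

curl -XGET "http://${ip}:9200/_all/_settings/index.blocks*?pretty" -u elastic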

Solution

The usual cause is that one or more data nodes are running low on disk space: once a node's disk usage crosses the flood-stage watermark (95% by default), Elasticsearch marks every index with a shard on that node as read-only / allow delete, and writes to those indices are rejected with the cluster_block_exception shown above. The lasting fix is to add disk capacity or delete data.
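
To see how close each data node is to the watermark, the _cat/allocation API reports per-node disk usage (${ip} is again a placeholder for a node address):

curl -XGET "http://${ip}:9200/_cat/allocation?v" -u elastic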

As a temporary workaround, the following command clears the block on all indices (setting index.blocks.read_only_allow_delete to null removes the setting entirely):
curl -XPUT -H "Content-Type: application/json" "http://${ip}:9200/_all/_settings?pretty" -d '{"index.blocks.read_only_allow_delete": null}' -u elastic
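
Note that on Elasticsearch 6.x the block is never lifted automatically, and it will be re-applied on the next disk check as long as any node remains above the flood-stage watermark. If freeing space takes time, the watermarks can be raised temporarily through the cluster settings API; this is only a sketch, and the 97%/98%/99% values are illustrative, not recommendations:

curl -XPUT -H "Content-Type: application/json" "http://${ip}:9200/_cluster/settings?pretty" -u elastic -d '
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "97%",
    "cluster.routing.allocation.disk.watermark.high": "98%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "99%"
  }
}'

Reset these settings to null once the underlying disk pressure is resolved, then clear the read-only block again if it was re-applied in the meantime.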