
Configuring Chinese and Pinyin Analyzers in Elasticsearch

程序员文章站 2024-02-21 22:43:16

1. Introduction

Elasticsearch uses the standard analyzer by default. It also ships other built-in analyzers, such as simple, whitespace, and the language analyzers.
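To see why a dedicated Chinese analyzer is needed, try the built-in standard analyzer on Chinese text. It breaks CJK text into individual characters rather than meaningful words:

```json
GET _analyze
{
  "analyzer": "standard",
  "text": "阿里巴巴"
}
```

This returns one token per character (阿, 里, 巴, 巴), which is rarely useful for Chinese full-text search.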

2. Configuring the Chinese Analyzer (requires git, maven, and unzip)

git clone https://github.com/medcl/elasticsearch-analysis-ik.git
cd elasticsearch-analysis-ik
mvn package

mkdir /usr/local/elasticsearch/plugins/ik
mv target/releases/elasticsearch-analysis-ik-6.4.0.zip /usr/local/elasticsearch/plugins/ik
cd /usr/local/elasticsearch/plugins/ik
unzip elasticsearch-analysis-ik-6.4.0.zip

rm -f elasticsearch-analysis-ik-6.4.0.zip

# give the es user ownership of the plugin directory,
# then restart Elasticsearch from its home directory
chown -R es:gp /usr/local/elasticsearch/plugins/ik
cd /usr/local/elasticsearch
./bin/elasticsearch -d
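After the restart, you can confirm the plugin was picked up by listing the installed plugins; an analysis-ik entry should appear:

```json
GET _cat/plugins
```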

3. Testing the Chinese Analyzer

# ik_max_word: split into as many tokens as possible
GET _analyze
{
  "analyzer":"ik_max_word",
  "text":"阿里巴巴"
}

# ik_smart: split into as few tokens as possible
GET _analyze
{
  "analyzer":"ik_smart",
  "text":"阿里巴巴"
}
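A common pattern, recommended in the IK plugin's documentation, is to index with ik_max_word (fine-grained, better recall) and search with ik_smart (coarse-grained, better precision). A sketch for ES 6.x, where mappings still require a type name; the index name my_index is a placeholder:

```json
PUT my_index
{
  "mappings": {
    "_doc": {
      "properties": {
        "content": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_smart"
        }
      }
    }
  }
}
```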

4. Configuring the Pinyin Analyzer

git clone https://github.com/medcl/elasticsearch-analysis-pinyin.git
cd elasticsearch-analysis-pinyin
mvn package

mkdir /usr/local/elasticsearch/plugins/pinyin
mv target/releases/elasticsearch-analysis-pinyin-6.4.0.zip /usr/local/elasticsearch/plugins/pinyin
cd /usr/local/elasticsearch/plugins/pinyin
unzip elasticsearch-analysis-pinyin-6.4.0.zip

rm -f elasticsearch-analysis-pinyin-6.4.0.zip

# give the es user ownership of the plugin directory,
# then restart Elasticsearch from its home directory
chown -R es:gp /usr/local/elasticsearch/plugins/pinyin
cd /usr/local/elasticsearch
./bin/elasticsearch -d

5. Testing the Pinyin Analyzer

GET _analyze
{
  "text": "阿里巴巴",
  "analyzer": "pinyin"
}
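To make a field searchable by both Chinese words and their pinyin, the plugin's pinyin token filter can be chained after the IK tokenizer in a custom analyzer. An illustrative sketch only; the index name and the chosen filter options are assumptions, see the pinyin plugin's README for the full option list:

```json
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "ik_pinyin_analyzer": {
          "tokenizer": "ik_max_word",
          "filter": ["my_pinyin"]
        }
      },
      "filter": {
        "my_pinyin": {
          "type": "pinyin",
          "keep_first_letter": true,
          "keep_full_pinyin": true,
          "keep_original": true
        }
      }
    }
  }
}
```

With this in place, a text field using ik_pinyin_analyzer can match queries typed as pinyin (e.g. "alibaba") as well as the original Chinese.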