How to Insert Data into Elasticsearch from Python
This article explains how to insert data into Elasticsearch from Python. The example code is covered in detail and should be a useful reference for study or work; interested readers can follow along.
While building a crawler with Scrapy, I needed to store the scraped data in ES. I found two approaches online and, copying by example, got both working; I am noting them down here.
First, install ES itself; the version used here is the fairly old 5.6.1.
Then use pip to install the ES client packages that match that ES version:
pip install elasticsearch-dsl==5.1.0
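As a sanity check before pinning the client package, you can ask the server for its version. A minimal sketch, assuming the ES node at 192.168.52.138 used throughout this article:

from elasticsearch import Elasticsearch

# Connect to the node and print the server version, so the client
# pin (a 5.x client here) can be chosen to match the cluster (5.6.1).
es = Elasticsearch(['192.168.52.138:9200'])
print(es.info()['version']['number'])  # expected: '5.6.1'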
Method 1:
Below is the complete code of the pipelines.py module:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

from elasticsearch_dsl import Date, Nested, Boolean, analyzer, Completion, Keyword, Text, Integer, DocType
from elasticsearch_dsl.connections import connections
from elasticsearch import Elasticsearch

connections.create_connection(hosts=['192.168.52.138'])
es = Elasticsearch()


class SinafinancespiderPipeline(object):
    def process_item(self, item, spider):
        return item


class AticleType(DocType):
    page_from = Keyword()
    domain = Keyword()      # note: 'domain' raised an error at one point
    cra_url = Keyword()
    spider = Keyword()
    cra_time = Keyword()
    page_release_time = Keyword()
    page_title = Text(analyzer="ik_max_word")
    page_content = Text(analyzer="ik_max_word")

    class Meta:
        index = "scrapy"
        doc_type = "sinafinance"
        # Neither of the following took effect; noted here for the record
        settings = {
            "number_of_shards": 3,
        }
        mappings = {
            '_id': {'path': 'cra_url'}
        }


# Writes items into ES; enable this class (ExchangeratespiderEsPipeline) in settings.
# Requires: pip install elasticsearch-dsl==5.1.0 (must match the ES version)
class ExchangeratespiderEsPipeline(object):
    from elasticsearch5 import Elasticsearch
    es = Elasticsearch(['192.168.52.138:9200'], sniff_on_start=True)

    def process_item(self, item, spider):
        spider.logger.info("-----enter into insert ES")
        article = AticleType()
        article.page_from = item['page_from']
        article.domain = item['domain']
        article.cra_url = item['cra_url']
        article.spider = item['spider']
        article.cra_time = item['cra_time']
        article.page_release_time = item['page_release_time']
        article.page_title = item['page_title']
        article.page_content = item['page_content']
        article.save()
        return item
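A side note on the Meta settings and mappings that never took effect: elasticsearch-dsl only pushes a DocType's index mapping to the cluster when the class's init() method is called, typically once before the first write. A minimal sketch, assuming the connection created in the module above:

# Create the 'scrapy' index with the AticleType field mapping.
# Run once before indexing; without this call elasticsearch-dsl
# never sends the mapping, which may be why the Meta settings
# appeared to do nothing.
AticleType.init()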
The method above does write data into ES, but re-crawling inserts the same records again, because the primary key "_id" is generated by ES itself and I could not find an entry point for setting a custom _id. So I abandoned it.
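For the record, elasticsearch-dsl does expose an entry point for a custom _id: assign it to the document's meta attribute before saving. A minimal sketch against the AticleType class above (the URL is illustrative):

article = AticleType()
article.page_title = 'some title'
# meta.id becomes the document _id, so saving the same URL twice
# overwrites the document instead of duplicating it
article.meta.id = 'http://example.com/some-page'
article.save()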
Method 2: write with a custom primary key, so repeated inserts overwrite instead of duplicating
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

from elasticsearch5 import Elasticsearch


class SinafinancespiderPipeline(object):
    def process_item(self, item, spider):
        return item


# Writes items into ES; enable this class (SinafinancespiderEsPipeline) in settings.
# Requires: pip install elasticsearch-dsl==5.1.0 (must match the ES version)
class SinafinancespiderEsPipeline(object):
    def __init__(self):
        hosts = ['192.168.52.138:9200']
        # Create the ES client
        self.es = Elasticsearch(
            hosts,
            # Sniff the cluster's nodes on startup
            sniff_on_start=True,
            # Refresh the node list when a connection to a node fails
            sniff_on_connection_fail=True,
            # Refresh node info every 60 seconds
            sniffer_timeout=60
        )

    def process_item(self, item, spider):
        spider.logger.info("-----enter into insert ES")
        doc = {
            'page_from': item['page_from'],
            'domain': item['domain'],
            'spider': item['spider'],
            'page_release_time': item['page_release_time'],
            'page_title': item['page_title'],
            'page_content': item['page_content'],
            'cra_url': item['cra_url'],
            'cra_time': item['cra_time'],
        }
        # Passing id= makes cra_url the document _id, so re-crawling
        # the same URL overwrites the existing document instead of
        # inserting a duplicate
        self.es.index(index='scrapy', doc_type='sinafinance', body=doc, id=item['cra_url'])
        return item
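With cra_url as the _id, writing the same URL twice leaves a single document whose _version is bumped rather than a duplicate. A quick check, assuming the es client and doc dict from the pipeline above (the URL is illustrative):

res1 = es.index(index='scrapy', doc_type='sinafinance', body=doc, id='http://example.com/a')
res2 = es.index(index='scrapy', doc_type='sinafinance', body=doc, id='http://example.com/a')
# The second call overwrites in place: same _id, higher version
print(res1['_version'], res2['_version'])  # e.g. 1 2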
How to search the data:
# Build the search body as a dict
query = {
    'query': {
        'bool': {
            'must': [
                {'match': {'_all': 'python web'}}
            ],
            'filter': [
                {'term': {'status': 2}}
            ]
        }
    }
}

# Query data
data = es.search(index='articles', doc_type='article', body=query)
print(data)

# Insert
es.index(...)
# Update
es.update(...)
# Delete
es.delete()
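The es.update(...) and es.delete() placeholders above take the same index/doc_type/id coordinates as es.index. A minimal sketch against the 'scrapy' index used in this article (the URL is illustrative):

# Partial update: only the fields inside 'doc' change
es.update(index='scrapy', doc_type='sinafinance',
          id='http://example.com/a',
          body={'doc': {'page_title': 'updated title'}})

# Delete the document stored under a given crawled URL
es.delete(index='scrapy', doc_type='sinafinance', id='http://example.com/a')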
Finally, register the custom pipeline class in the settings.py module:
ITEM_PIPELINES = {
    # 'sinafinancespider.pipelines.SinafinancespiderPipeline': 300,
    'sinafinancespider.pipelines.SinafinancespiderEsPipeline': 300,
}
That is all for this article; I hope it helps with your study.