Python Crawler Framework Scrapy: ITEM PIPELINE
Typical uses of item pipelines are:
- cleansing HTML data
- validating scraped data (checking that the items contain certain fields)
- checking for duplicates (and dropping them); note that this is only a safeguard, since the real deduplication in Scrapy happens at the request (URL) stage (see the sketch after this list)
- storing the scraped item in a database
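For context on that request-stage deduplication: Scrapy's scheduler filters out requests it has already seen (via a request-fingerprint dupefilter by default), and dont_filter=True bypasses that check. A minimal sketch, with a hypothetical spider and URL:

import scrapy


class DedupDemoSpider(scrapy.Spider):
    # Hypothetical spider, just to illustrate the scheduler-level dupefilter.
    name = "dedup_demo"
    start_urls = ["https://example.com/page"]  # hypothetical URL

    def parse(self, response):
        # A second request to the same URL is normally dropped by the
        # scheduler's dupefilter; dont_filter=True bypasses that check.
        yield scrapy.Request(response.url, callback=self.parse_again,
                             dont_filter=True)

    def parse_again(self, response):
        yield {"url": response.url}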
Core methods of an item pipeline (a minimal sketch follows the list):
- open_spider(spider): optional; called when the spider is opened, mainly for initialization work such as opening a database connection.
- close_spider(spider): optional; called when the spider is closed, for wrap-up work such as closing the database connection.
- from_crawler(cls, crawler): optional; a class method (marked with @classmethod) called when the pipeline is created, before open_spider(). It is tied to __init__() in that it builds the pipeline instance, typically pulling configuration from the crawler settings; usually you do not need to override it.
- process_item(item, spider): required; every item yielded by the spider is passed to this method for processing. It must return an item or raise a DropItem exception.
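As a concrete illustration of the four hooks, here is a minimal sketch of a deduplicating JSON-lines pipeline; the EXPORT_FILENAME settings key, the output file name, and the id field are all hypothetical:

import json

from scrapy.exceptions import DropItem


class DedupJsonLinesPipeline:

    def __init__(self, filename):
        self.filename = filename
        self.ids_seen = set()

    @classmethod
    def from_crawler(cls, crawler):
        # Build the pipeline instance from project settings;
        # runs before open_spider(). EXPORT_FILENAME is a made-up setting.
        return cls(filename=crawler.settings.get("EXPORT_FILENAME", "items.jl"))

    def open_spider(self, spider):
        # Initialization work: open the output file.
        self.file = open(self.filename, "w", encoding="utf-8")

    def close_spider(self, spider):
        # Wrap-up work: close the output file.
        self.file.close()

    def process_item(self, item, spider):
        # Validate, deduplicate, then persist; DropItem discards the item.
        if "id" not in item:
            raise DropItem("missing id in %r" % item)
        if item["id"] in self.ids_seen:
            raise DropItem("duplicate item %r" % item["id"])
        self.ids_seen.add(item["id"])
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item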
Official example from the Scrapy documentation:
import scrapy
import hashlib
from urllib.parse import quote


class ScreenshotPipeline(object):
    """Pipeline that uses Splash to render screenshot of
    every Scrapy item."""

    SPLASH_URL = "http://localhost:8050/render.png?url={}"

    async def process_item(self, item, spider):
        encoded_item_url = quote(item["url"])
        screenshot_url = self.SPLASH_URL.format(encoded_item_url)
        request = scrapy.Request(screenshot_url)
        response = await spider.crawler.engine.download(request, spider)

        if response.status != 200:
            # Error happened, return item.
            return item

        # Save screenshot to file, filename will be hash of url.
        url = item["url"]
        url_hash = hashlib.md5(url.encode("utf8")).hexdigest()
        filename = "{}.png".format(url_hash)
        with open(filename, "wb") as f:
            f.write(response.body)

        # Store filename in item.
        item["screenshot_filename"] = filename
        return item
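Two usage notes on this example: the coroutine form of process_item() requires Scrapy 2.0 or later, and the pipeline assumes a Splash rendering service is listening at the SPLASH_URL shown above.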
Note that you also need to enable the pipeline in the settings file. The integer values determine the order in which pipelines run (lower values run first), customarily in the 0-1000 range:
ITEM_PIPELINES = {
    'myproject.pipelines.PricePipeline': 300,
    'myproject.pipelines.JsonWriterPipeline': 800,
}
Conceptually, an item pipeline feels like an interceptor: every item yielded by the spider passes through each enabled pipeline in turn.
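To make the interceptor analogy concrete: each pipeline's process_item() receives whatever the previous one returned, and raising DropItem stops the chain so later pipelines never see that item. A sketch of the PricePipeline named above, simplified from the Scrapy docs (the VAT factor and field names come from that docs example):

from scrapy.exceptions import DropItem


class PricePipeline:

    vat_factor = 1.15

    def process_item(self, item, spider):
        if item.get("price"):
            if item.get("price_excludes_vat"):
                item["price"] = item["price"] * self.vat_factor
            # Returned items flow on to the next pipeline in the chain,
            # here JsonWriterPipeline (order 800).
            return item
        # Dropped items never reach later pipelines.
        raise DropItem("Missing price in %r" % item)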
See also:
- https://doc.scrapy.org/en/latest/topics/feed-exports.html#topics-feed-exports
- https://www.cnblogs.com/518894-lu/p/9053939.html