Python: A Detailed Walkthrough of Crawling CSDN's Site-Wide Hot-List Title Keywords with the Scrapy Framework
Preface
This follows up on my previous post: Python — a detailed walkthrough of crawling CSDN's site-wide hot list and counting keyword frequencies in the titles.
I re-implemented the same thing on top of Scrapy. The underlying way of getting the page source is identical; Scrapy just makes the whole thing more structured. Below I will also point out the issues you need to watch for.
Here is the GitHub repository: this project on GitHub
Environment Setup
Install Scrapy
pip install scrapy -i https://pypi.douban.com/simple
Install Selenium
pip install selenium -i https://pypi.douban.com/simple
Install jieba
pip install jieba -i https://pypi.douban.com/simple
IDE: PyCharm
Download the ChromeDriver that matches your Chrome version: Google ChromeDriver download page
Check your browser version and download the matching driver.
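If you want to double-check that the browser and driver versions line up, a small smoke test like the sketch below works; the chromedriver path is only an example and should point at wherever you unpacked the driver.

from selenium import webdriver

# example path; adjust to your own chromedriver location
driver = webdriver.Chrome(executable_path="E:\\chromedriver_win32\\chromedriver.exe")
print(driver.capabilities.get('browserVersion'))                          # Chrome version
print(driver.capabilities.get('chrome', {}).get('chromedriverVersion'))   # driver version
driver.quit()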
Implementation
Let's get started.
Create the Project
Create the project with the scrapy command:
scrapy startproject csdn_hot_words
The project structure is the standard one that Scrapy generates.
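For reference, after scrapy startproject you get the standard Scrapy skeleton; the tools/ helper and main.py runner are added later in this post (their placement below is my assumption based on the imports, with main.py next to scrapy.cfg so Scrapy can find the project settings):

csdn_hot_words/
├── scrapy.cfg
├── main.py                       # runner script added later in this post
└── csdn_hot_words/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    ├── tools/
    │   └── analyse_sentence.py   # jieba keyword helper added below
    └── spiders/
        ├── __init__.py
        └── csdn.py               # the spider added below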
Define the Item
Following the logic from the previous post, the main field is a dictionary mapping title keywords to their occurrence counts. The code is as follows:
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class CsdnHotWordsItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    words = scrapy.Field()
Keyword Extraction Helper
jieba is used for word segmentation and keyword extraction.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2021/11/5 23:47
# @Author  : 至尊宝
# @Site    :
# @File    : analyse_sentence.py
import jieba.analyse


def get_key_word(sentence):
    result_dic = {}
    words_lis = jieba.analyse.extract_tags(
        sentence, topK=3, withWeight=True, allowPOS=())
    for word, flag in words_lis:
        if word in result_dic:
            result_dic[word] += 1
        else:
            result_dic[word] = 1
    return result_dic
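A quick sanity check of the helper (the title string is just an example): extract_tags returns each keyword once, so every value in the returned dict is 1; the real counting across many titles happens later in the pipeline.

print(get_key_word('Python 详解通过Scrapy框架实现爬取CSDN全站热榜标题热词流程'))
# roughly {'Scrapy': 1, 'CSDN': 1, '热榜': 1}, depending on jieba's ranking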
Build the Spider
The spider needs a browser initialized in its constructor so that the dynamically loaded page can actually be rendered.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2021/11/5 23:47
# @Author  : 至尊宝
# @Site    :
# @File    : csdn.py
import scrapy
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

from csdn_hot_words.items import CsdnHotWordsItem
from csdn_hot_words.tools.analyse_sentence import get_key_word


class CsdnSpider(scrapy.Spider):
    name = 'csdn'
    # allowed_domains = ['blog.csdn.net']
    start_urls = ['https://blog.csdn.net/rank/list']

    def __init__(self):
        chrome_options = Options()
        chrome_options.add_argument('--headless')  # run Chrome in headless mode
        chrome_options.add_argument('--disable-gpu')
        chrome_options.add_argument('--no-sandbox')
        self.browser = webdriver.Chrome(chrome_options=chrome_options,
                                        executable_path="E:\\chromedriver_win32\\chromedriver.exe")
        self.browser.set_page_load_timeout(30)

    def parse(self, response, **kwargs):
        titles = response.xpath("//div[@class='hosetitem-title']/a/text()")
        for x in titles:
            item = CsdnHotWordsItem()
            item['words'] = get_key_word(x.get())
            yield item
Code Notes
1. Chrome runs in headless mode here, so no browser window pops up; everything happens in the background.
2. You need to fill in the path to your own chromedriver executable (a Selenium 4 variant is sketched right after these notes).
3. In parse, you can reuse the XPath from my previous article to grab the titles, run them through the keyword extractor, and build the item objects.
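If you are on Selenium 4, the chrome_options= and executable_path= keyword arguments used above are deprecated; a sketch of the equivalent setup (same example driver path) would be:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service

chrome_options = Options()
chrome_options.add_argument('--headless')      # run Chrome without a visible window
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--no-sandbox')
service = Service(executable_path="E:\\chromedriver_win32\\chromedriver.exe")
browser = webdriver.Chrome(service=service, options=chrome_options)
browser.set_page_load_timeout(30)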
Middleware Code
Add the JavaScript execution logic (the script scrolls the page so the lazy-loaded content gets rendered). Full middleware code:
# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals
from scrapy.http import HtmlResponse
from selenium.common.exceptions import TimeoutException
import time
from selenium.webdriver.chrome.options import Options

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter


class CsdnHotWordsSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Request or item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class CsdnHotWordsDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        js = '''
        let height = 0
        let interval = setInterval(() => {
            window.scrollTo({
                top: height,
                behavior: "smooth"
            });
            height += 500
        }, 500);
        setTimeout(() => {
            clearInterval(interval)
        }, 20000);
        '''
        try:
            spider.browser.get(request.url)
            spider.browser.execute_script(js)
            time.sleep(20)
            return HtmlResponse(url=spider.browser.current_url,
                                body=spider.browser.page_source,
                                encoding="utf-8", request=request)
        except TimeoutException as e:
            print('Timeout exception: {}'.format(e))
            spider.browser.execute_script('window.stop()')
        finally:
            spider.browser.close()

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.

        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.

        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
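One caveat with this middleware (not addressed in the original code): process_request only calls spider.browser.close(), which closes the current window. To make sure the chromedriver process is fully gone once the crawl ends, a closed() hook could be added to the spider, for example:

# possible addition to CsdnSpider in csdn.py
def closed(self, reason):
    # Scrapy calls this when the spider finishes; quit() ends the whole
    # browser session instead of just the current window
    self.browser.quit()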
Custom Pipeline
Define a pipeline that aggregates the word counts and writes the final result to a file. The code is as follows:
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class CsdnHotWordsPipeline:

    def __init__(self):
        self.file = open('result.txt', 'w', encoding='utf-8')
        self.all_words = []

    def process_item(self, item, spider):
        self.all_words.append(item)
        return item

    def close_spider(self, spider):
        key_word_dic = {}
        for y in self.all_words:
            print(y)
            for k, v in y['words'].items():
                if k.lower() in key_word_dic:
                    key_word_dic[k.lower()] += v
                else:
                    key_word_dic[k.lower()] = v
        word_count_sort = sorted(key_word_dic.items(),
                                 key=lambda x: x[1], reverse=True)
        for word in word_count_sort:
            self.file.write('{},{}\n'.format(word[0], word[1]))
        self.file.close()
Settings Configuration
A few adjustments to the configuration are needed, as follows:
# Scrapy settings for csdn_hot_words project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'csdn_hot_words'

SPIDER_MODULES = ['csdn_hot_words.spiders']
NEWSPIDER_MODULE = 'csdn_hot_words.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'csdn_hot_words (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 30
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.94 Safari/537.36'
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    'csdn_hot_words.middlewares.CsdnHotWordsSpiderMiddleware': 543,
}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'csdn_hot_words.middlewares.CsdnHotWordsDownloaderMiddleware': 543,
}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     'scrapy.extensions.telnet.TelnetConsole': None,
# }

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'csdn_hot_words.pipelines.CsdnHotWordsPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
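To make the actual adjustments easier to spot: compared with the generated defaults, USER_AGENT and DEFAULT_REQUEST_HEADERS send a browser-like identity, ROBOTSTXT_OBEY and COOKIES_ENABLED are turned off, DOWNLOAD_DELAY = 30 keeps the crawl slow, and the two middlewares plus the pipeline written above are registered under SPIDER_MIDDLEWARES, DOWNLOADER_MIDDLEWARES, and ITEM_PIPELINES. Everything else stays commented out.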
Run the Main Program
You could launch it with the scrapy command directly, but to make the logs easier to watch I added a small main program.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2021/11/5 22:41
# @Author  : 至尊宝
# @Site    :
# @File    : main.py
from scrapy import cmdline

cmdline.execute('scrapy crawl csdn'.split())
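With everything in place, the crawl can be started either way; both assume you run from the directory that contains scrapy.cfg:

scrapy crawl csdn
# or, to keep the logs in your IDE console:
python main.py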
Execution Results
Part of the execution log:
The final output is written to result.txt.
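Each line of result.txt is a keyword,count pair, sorted by count in descending order. A throwaway sketch (not part of the project) for peeking at the top ten:

with open('result.txt', encoding='utf-8') as f:
    for line in list(f)[:10]:
        word, count = line.rstrip('\n').split(',')
        print(word, count)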
Summary
See, Java is still yyds (the GOAT). I have no idea why "2021" also ranks so high as a keyword, so I figured I might as well add 2021 to my own titles too.
Posting the GitHub repository link once more: this project on GitHub
One disclaimer: the example in this article is for research and exploration only, not for malicious attacks.
Something to share:
Ordinary people who never put in the hard, grinding work to actually get something done have no business talking about whether they have talent.
— 烽火戏诸侯, 《剑来》
If this article was useful to you, please don't be stingy with your likes. Thank you.