
Scraping the novel 圣墟 from 6mao.com with Python's Scrapy


With some free time on my hands I wanted to read a novel and download it to my computer. After searching for a while I couldn't find a site that offered downloads, so I decided to scrape the novel's content myself and save it locally.

圣墟, Chapter 1 "沙漠中的彼岸花", by 辰东, on 6mao.com: http://www.6mao.com/html/40/40184/12601161.html

This is the page to crawl. Observe its structure:

[Screenshot: the chapter page, with the title and body text in the HTML]

and the "下一章" (next chapter) link at the bottom of the page:

[Screenshot: the next-chapter navigation link]

Then create the Scrapy project:

[Screenshot: the generated Scrapy project layout]
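For reference, a skeleton like this is normally generated with Scrapy's command-line tools; the names below match the code in this post (the domain argument is illustrative):

scrapy startproject sixmao
cd sixmao
scrapy genspider sixmaospider 6mao.com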

The spider, sixmaospider.py:

# -*- coding: utf-8 -*-
import scrapy
from ..items import SixmaoItem


class SixmaospiderSpider(scrapy.Spider):
    name = 'sixmaospider'
    #allowed_domains = ['http://www.6mao.com']
    start_urls = ['http://www.6mao.com/html/40/40184/12601161.html']  # first chapter of 圣墟

    def parse(self, response):
        novel_biaoti = response.xpath('//div[@id="content"]/h1/text()').extract()
        novel_neirong = response.xpath('//div[@id="neirong"]/text()').extract()

        novelitem = SixmaoItem()
        novelitem['novel_biaoti'] = novel_biaoti[0]  # chapter title
        print(novelitem['novel_biaoti'])

        # step by 2: every other extracted node is empty
        # (presumably whitespace text nodes between <br> tags)
        for i in range(0, len(novel_neirong), 2):
            novelitem['novel_neirong'] = novel_neirong[i]
            yield novelitem

        # next chapter: the third link in div.s_page is the "下一章" link
        nextpageurl = response.xpath('//div[@class="s_page"]/a/@href').extract()
        if len(nextpageurl) > 2 and nextpageurl[2]:
            nexturl = response.urljoin(nextpageurl[2])
            print('next chapter:', nexturl)
            # send the request for the next page and parse it with this same callback
            yield scrapy.Request(nexturl, callback=self.parse, dont_filter=False)
        else:
            print('no next chapter, stopping')
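To sanity-check the XPath expressions before running the whole project, they can be tried interactively in scrapy shell (these are the same selectors the spider uses):

scrapy shell http://www.6mao.com/html/40/40184/12601161.html
>>> response.xpath('//div[@id="content"]/h1/text()').extract()
>>> response.xpath('//div[@id="neirong"]/text()').extract()
>>> response.xpath('//div[@class="s_page"]/a/@href').extract()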

pipelinesio.py, which saves the content to a local file:

import os

# make sure the output directory exists before anything is written
os.makedirs('./data', exist_ok=True)


class SixmaoPipeline(object):
    def process_item(self, item, spider):
        # append each chunk of chapter text to the output file;
        # the with-statement flushes and closes the file automatically
        with open('./data/圣墟.txt', 'a', encoding='utf-8') as fp:
            fp.write(item['novel_neirong'])
        return item
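Reopening the file for every yielded item works, but it is wasteful. Here is a sketch of an alternative using Scrapy's standard open_spider/close_spider pipeline hooks (the class name SixmaoFilePipeline is made up for illustration; whichever class you use must be the one registered in ITEM_PIPELINES):

class SixmaoFilePipeline(object):
    def open_spider(self, spider):
        # called once when the spider starts: open the file and keep it open
        self.fp = open('./data/圣墟.txt', 'a', encoding='utf-8')

    def process_item(self, item, spider):
        self.fp.write(item['novel_neirong'])
        return item

    def close_spider(self, spider):
        # called once when the spider finishes
        self.fp.close()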

items.py

import scrapy


class SixmaoItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    novel_biaoti = scrapy.Field()   # chapter title
    novel_neirong = scrapy.Field()  # chapter body text

startsixmao.py; right-click this file and run it, and the project starts crawling:

from scrapy.cmdline import execute

execute(['scrapy', 'crawl', 'sixmaospider'])
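Equivalently, the crawl can be started through Scrapy's CrawlerProcess API instead of the cmdline helper (a minimal sketch; it assumes the script is run from the project directory so settings.py is picked up):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# load the project settings and run the spider by name
process = CrawlerProcess(get_project_settings())
process.crawl('sixmaospider')
process.start()  # blocks until the crawl finishes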

settings.py

LOG_LEVEL = 'INFO'      # turn on logging at INFO level
LOG_FILE = 'novel.log'  # write the log to a file

DOWNLOADER_MIDDLEWARES = {
    'sixmao.middlewares.SixmaoDownloaderMiddleware': 543,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,  # disable the stock user-agent middleware
    'sixmao.rotate_useragent.RotateUserAgentMiddleware': 400,  # use the rotating user agents instead
}


ITEM_PIPELINES = {
    #'sixmao.pipelines.SixmaoPipeline': 300,
    'sixmao.pipelinesio.SixmaoPipeline': 300,  # register the file-writing pipeline
}

SPIDER_MIDDLEWARES = {
    'sixmao.middlewares.SixmaoSpiderMiddleware': 543,
}  # enable the spider middleware; nothing else should need changing
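While editing settings.py it may also be worth throttling the crawl so the site is not hammered. These are standard Scrapy settings; the values here are illustrative, not from the original project:

ROBOTSTXT_OBEY = True               # honour the site's robots.txt
DOWNLOAD_DELAY = 1                  # wait one second between requests
CONCURRENT_REQUESTS_PER_DOMAIN = 1  # fetch chapters one at a time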

rotate_useragent.py gives each request a rotating User-Agent so the server is less likely to block the crawler:

# import the random module for picking an agent at random
import random
# import the UserAgentMiddleware class from Scrapy's user-agent middleware module
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


# RotateUserAgentMiddleware subclasses UserAgentMiddleware.
# Purpose: keep a list of user-agent strings and attach a randomly chosen one
# to every request the spider sends, so each request reaches the site
# disguised as an ordinary browser.
# Anti-crawling countermeasure: many sites block traffic that identifies
# itself as a crawler, so the crawler masquerades as a browser.
class RotateUserAgentMiddleware(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        # rotate the User-Agent at random on every request
        ua = random.choice(self.user_agent_list)
        if ua:
            # show which User-Agent was picked
            print(ua)
            request.headers.setdefault('User-Agent', ua)

    # the default user_agent_list covers Chrome, IE, Firefox, Mozilla, Opera and Netscape
    # more user-agent strings can be found at http://www.useragentstring.com/pages/useragentstring.php
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    ]
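This rotation is the only User-Agent logic in play because the stock UserAgentMiddleware was set to None in settings.py above; the rotating middleware's priority of 400 then guarantees the header is attached before the request goes out.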

The final result of a run:

[Screenshot: the crawled chapters of 圣墟 written to the output file]

And there you have it: a small, complete Scrapy project.