
Using the Scrapy framework with Spynner to crawl pages that load data via JS/AJAX and extract their information (crawling a WeChat official account article list as the example)


Web pages to be crawled fall into several categories:

1. Static pages

2. Dynamic pages (pages that load their data via JS/AJAX)

3. Pages that require a simulated login before they can be crawled

4. Encrypted pages

 

Solutions and approaches for types 3 and 4 will be covered in later blog posts.

For now, only the solutions and approaches for types 1 and 2:

I. Static pages

      There are many, many ways to crawl and parse static pages. Both Java and Python provide plenty of libraries and frameworks: Java has HttpClient, HtmlUnit, Jsoup, HtmlParser, and so on; Python has urllib, urllib2, BeautifulSoup, Scrapy, and so on. I won't go into detail here, as there is abundant material online.
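As a minimal illustration (not the focus of this article), fetching and parsing a static page in Python 2 with urllib2 and BeautifulSoup (bs4) might look like the sketch below; the URL is a placeholder:

import urllib2
from bs4 import BeautifulSoup

# fetch the raw HTML of a static page (placeholder URL)
html = urllib2.urlopen('http://example.com/').read()
soup = BeautifulSoup(html, 'html.parser')
# print every link found on the page
for a in soup.find_all('a'):
    print(a.get('href'))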

 

II. Dynamic pages

      For crawling purposes, dynamic pages are those whose data is loaded through JS/AJAX after the initial request. There are two ways to crawl them:

      1. Use a packet-capture tool (or the browser's devtools) to analyze the JS/AJAX requests, then replay those requests yourself to fetch the data the JS would have loaded (see the sketch after this list).

      2. Drive a browser engine, obtain the page source after it has finished loading, and then parse that source.
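A minimal sketch of approach 1: once a capture tool has revealed the AJAX endpoint, replay it directly and parse the response. The endpoint URL and JSON layout below are hypothetical placeholders:

import json
import urllib2

# replay the XHR that the page's JS would have made (hypothetical endpoint)
req = urllib2.Request('http://example.com/api/articles?page=1',
                      headers={'X-Requested-With': 'XMLHttpRequest'})
data = json.loads(urllib2.urlopen(req).read())
for article in data.get('articles', []):   # hypothetical JSON layout
    print(article['title'])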

      Anyone who studies crawlers must know JavaScript. There is plenty of learning material online, so I won't elaborate; this point is included only for the article's completeness.

Java also has several toolkits that drive a browser engine, but they are not today's focus. Today's topic is exactly what the title says: combining the Scrapy framework with Spynner to crawl pages that load data via JS/AJAX and extract their information, using a WeChat official account article list as the example.
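To see what Spynner does on its own before it is wrapped into a Scrapy middleware in step 4, here is a minimal standalone sketch built from the same calls used there (the URL is a placeholder):

import spynner

browser = spynner.Browser()
browser.create_webview()
browser.load('http://example.com/', 20)   # wait up to 20s for the initial page load
try:
    browser.wait_load(10)                 # give JS/AJAX another 10s to settle
except Exception:
    pass                                  # timed out: proceed with what has rendered
html = browser.html                       # the DOM after JS has run, as unicode
browser.close()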


Start......

1. Create a project for crawling WeChat official account article lists (hereafter "the weixin crawler"):

scrapy startproject weixin
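This generates the standard Scrapy project skeleton (layout as of Scrapy 1.0, the version used in this article), which the following steps fill in:

weixin/
    scrapy.cfg
    weixin/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py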

 

2. Create a spider file under the spiders directory:

vim weixinlist.py

    and write the following code into it:

import sys
sys.path.insert(0, '..')   # make the weixin package importable when run standalone

import scrapy
from scrapy import Spider
from weixin.items import WeixinItem


class MySpider(Spider):
    name = 'weixinlist'
    allowed_domains = []
    start_urls = [
        'http://weixin.sogou.com/gzh?openid=oIWsFt5QBSP8mn4Jx2WSGw_rCNzQ',
    ]
    download_delay = 1
    print('start init....')   # runs at class-definition time, before the crawl

    def parse(self, response):
        sel = scrapy.Selector(response)
        print('hello,world!')
        print(response)
        print(sel)
        # each article entry is a <div class="txt-box"> whose <h4> holds the link
        entries = sel.xpath('//div[@class="txt-box"]/h4')
        items = []
        for single in entries:
            data = WeixinItem()
            title = single.xpath('a/text()').extract()
            link = single.xpath('a/@href').extract()
            data['title'] = title
            data['link'] = link
            if len(title) > 0:
                print(title[0].encode('utf-8'))
                print(link)
            items.append(data)
        return items
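To sanity-check the XPath without hitting the live site, it can be run against a hand-written fragment; the HTML below is a simplified, assumed stand-in for Sogou's actual result markup:

from scrapy import Selector

# simplified stand-in for the Sogou result markup the spider targets
fragment = u'''
<div class="txt-box">
  <h4><a href="http://mp.weixin.qq.com/s?__biz=placeholder">Some title</a></h4>
</div>'''
for h4 in Selector(text=fragment).xpath('//div[@class="txt-box"]/h4'):
    print(h4.xpath('a/text()').extract())   # [u'Some title']
    print(h4.xpath('a/@href').extract())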


3. Add the WeixinItem class to items.py:

 

import scrapy


class WeixinItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    link = scrapy.Field()


4. Create a downloader middleware file, downloadwebkit.py, in the same directory as items.py, and write the following code into it:

import spynner
import pyquery
from scrapy.http import HtmlResponse


class WebkitDownloaderTest(object):
    def process_request(self, request, spider):
        # spin up a webkit browser, let the page's JS/AJAX run, then hand
        # the rendered DOM back to Scrapy as an ordinary HtmlResponse
        browser = spynner.Browser()
        browser.create_webview()
        browser.set_html_parser(pyquery.PyQuery)
        browser.load(request.url, 20)        # wait up to 20s for the initial load
        try:
            browser.wait_load(10)            # give JS/AJAX another 10s to settle
        except Exception:
            pass                             # timed out: use whatever has rendered
        body = browser.html.encode('utf-8')  # rendered source as UTF-8 bytes
        browser.close()                      # release the webkit/Qt resources
        return HtmlResponse(request.url, body=body, encoding='utf-8')


   This code drives the browser engine and obtains the page source after loading has completed.
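As written, the middleware renders every request. The WEBKIT_DOWNLOADER setting declared in step 5 is meant to restrict rendering to specific spiders; one way to wire that up, assuming Scrapy 1.0+ where spiders expose a settings attribute, is the variant below:

import spynner
import pyquery
from scrapy.http import HtmlResponse


class WebkitDownloaderTest(object):
    def process_request(self, request, spider):
        # only render spiders listed in the WEBKIT_DOWNLOADER setting;
        # returning None hands the request to Scrapy's default downloader
        if spider.name not in spider.settings.getlist('WEBKIT_DOWNLOADER'):
            return None
        browser = spynner.Browser()
        browser.create_webview()
        browser.set_html_parser(pyquery.PyQuery)
        browser.load(request.url, 20)
        try:
            browser.wait_load(10)
        except Exception:
            pass
        body = browser.html.encode('utf-8')
        browser.close()
        return HtmlResponse(request.url, body=body, encoding='utf-8')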

5. Configure settings.py to declare the downloader middleware.

    Add the following code at the bottom:

# which spiders should be rendered through the webkit downloader
WEBKIT_DOWNLOADER = ['weixinlist']

DOWNLOADER_MIDDLEWARES = {
    'weixin.downloadwebkit.WebkitDownloaderTest': 543,
}

# spynner drives a real webkit widget and needs an X display; point Qt at one
import os
os.environ["DISPLAY"] = ":0"


6. Run the program:

    Run the command:

 

scrapy crawl weixinlist

    Output:

kevinflynndeMacBook-Pro:spiders kevinflynn$ scrapy crawl weixinlist
start init....
2015-07-28 21:13:55 [scrapy] INFO: Scrapy 1.0.1 started (bot: weixin)
2015-07-28 21:13:55 [scrapy] INFO: Optional features available: ssl, http11
2015-07-28 21:13:55 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'weixin.spiders', 'SPIDER_MODULES': ['weixin.spiders'], 'BOT_NAME': 'weixin'}
2015-07-28 21:13:55 [py.warnings] WARNING: :0: UserWarning: You do not have a working installation of the service_identity module: 'No module named service_identity'.  Please install it from <https://pypi.python.org/pypi/service_identity> and make sure all of its dependencies are satisfied.  Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification.  Many valid certificate/hostname mappings may be rejected.

2015-07-28 21:13:55 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-07-28 21:13:55 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, WebkitDownloaderTest, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-07-28 21:13:55 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-07-28 21:13:55 [scrapy] INFO: Enabled item pipelines: 
2015-07-28 21:13:55 [scrapy] INFO: Spider opened
2015-07-28 21:13:55 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-07-28 21:13:55 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
QFont::setPixelSize: Pixel size <= 0 (0)
2015-07-28 21:14:08 [scrapy] DEBUG: Crawled (200) <GET http://weixin.sogou.com/gzh?openid=oIWsFt5QBSP8mn4Jx2WSGw_rCNzQ> (referer: None)
hello,world!
<200 http://weixin.sogou.com/gzh?openid=oIWsFt5QBSP8mn4Jx2WSGw_rCNzQ>
<Selector xpath=None data=u'<html><head><meta http-equiv="X-UA-Compa'>
互联网协议入门
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=210032701&idx=1&sn=6b1fc2bc5d4eb0f87513751e4ccf610c&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
自己动手写贝叶斯分类器给图书分类
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=210013947&idx=1&sn=1f36ba5794e22d0fb94a9900230e74ca&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
不当免费技术支持的10种方法
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209998175&idx=1&sn=216106034a3b4afea6e67f813ce1971f&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
以 Python 为实例,介绍贝叶斯理论
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209998175&idx=2&sn=2f3dee873d7350dfe9546ab4a9323c05&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
我从腾讯那“偷了”3000万QQ用户数据,出了份很有趣的...
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209980651&idx=1&sn=11fd40a2dee5132b0de8d4c79a97dac2&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
如何用 Spark 快速开发应用?
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209820653&idx=2&sn=23712b78d82fb412e960c6aa1e361dd3&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
一起来写个简单的解释器(1)
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209797651&idx=1&sn=15073e27080e6b637c8d24b6bb815417&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
那个直接在机器码中改 Bug 的家伙
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209762756&idx=1&sn=04ae1bc3a366d358f474ac3e9a85fb60&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
把一个库开源,你该做些什么
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209762756&idx=2&sn=0ac961ffd82ead6078a60f25fed3c2c4&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
程序员的困境
[u'http://mp.weixin.qq.com/s?__biz=MzA4MjEyNTA5Mw==&mid=209696436&idx=1&sn=8cb55b03c8b95586ba4498c64fa54513&3rd=MzA3MDU4NTYzMw==&scene=6#rd']
2015-07-28 21:14:08 [scrapy] INFO: Closing spider (finished)
2015-07-28 21:14:08 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/response_bytes': 131181,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 7, 28, 13, 14, 8, 958071),
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'log_count/WARNING': 1,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2015, 7, 28, 13, 13, 55, 688111)}
2015-07-28 21:14:08 [scrapy] INFO: Spider closed (finished)
QThread: Destroyed while thread is still running
kevinflynndeMacBook-Pro:spiders kevinflynn$