
Crawler-Related Notes


When trying to keep a site from being scraped, the simplest defense is to decide whether a request was generated by a program or by a human, and the easiest signal for that is the request headers. Here is an example:

In [9]: import requests
In [10]: url = 'http://www.baidu.com'
In [11]: resp = requests.get(url)
In [12]: resp.request.headers
Out[12]: {'User-Agent': 'python-requests/2.18.4', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}

Above you can see 'User-Agent': 'python-requests/2.18.4'.
Baidu happens to let requests with this User-Agent through.

Next, an example of a request that gets rejected:

In [6]: url = 'https://www.amazon.cn/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB%E5%85%A8%E8%A7%A3%E6%9E%90-%E
   ...: 6%8A%80%E6%9C%AF-%E5%8E%9F%E7%90%86%E4%B8%8E%E5%AE%9E%E8%B7%B5-%E7%BD%97%E5%88%9A/dp/B06XXMZJN6
   ...: /ref=sr_1_1?s=books&ie=UTF8&qid=1505556410&sr=1-1&keywords=%E7%88%AC%E8%99%AB'

In [7]: resq = requests.get(url)
In [8]: resq.status_code
Out[8]: 503

In [10]: resq.text
Out[10]: '<!--\n        To discuss automated access to Amazon data please contact [email protected]\n        For information about migrating to our APIs refer to our Marketplace APIs at https://developer.amazonservices.com.cn/index.html/ref=rm_5_sv, or our Product Advertising API at https://associates.amazon.cn/gp/advertising/api/detail/main.html/ref=rm_5_ac for advertising use cases.\n-->\n<html><head>…</head><body>…</body></html>'

The interesting part is the embedded HTML comment: To discuss automated access to Amazon data please contact [email protected]. For information about migrating to our APIs refer to our Marketplace APIs at https://developer.amazonservices.com.cn/index.html/ref=rm_5_sv, or our Product Advertising API at https://associates.amazon.cn/gp/advertising/api/detail/main.html/ref=rm_5_ac for advertising use cases. The rest of the body (elided above) is Amazon's Chinese error page, rendered as mojibake because the response bytes were decoded with the wrong charset; Amazon has recognized the request as automated and blocked it.

Now let's add a browser User-Agent header and look at the result again:

In [12]: headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36"}

In [13]: resq = requests.get(url, headers=headers)

In [14]: resq.status_code
Out[14]: 200

In [15]: resq.request.headers
Out[15]: {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}

As shown above, simply spoofing the User-Agent turns the 503 into a 200.

Now on to today's tool libraries.

fake-useragent

pip install fake-useragent
In [1]: from fake_useragent import UserAgent

In [2]: ua = UserAgent()

In [3]: ua.ie
Out[3]: 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)'

In [4]: ua.google
Out[4]: 'Mozilla/5.0 (X11; CrOS i686 3912.101.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.116 Safari/537.36'

In [5]: ua.msie
Out[5]: 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/4.0; InfoPath.2; SV1; .NET CLR 2.0.50727; WOW64)'

In [6]: ua.opera
Out[6]: 'Opera/9.80 (Windows NT 5.1; U; cs) Presto/2.7.62 Version/11.01'
In [8]: ua.chrome
Out[8]: 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36'

In [9]: ua.firefox
Out[9]: 'Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:21.0.0) Gecko/20121011 Firefox/21.0.0'

In [10]: ua.ff
Out[10]: 'Mozilla/5.0 (Windows NT 6.1; rv:22.0) Gecko/20130405 Firefox/22.0'

In [11]: ua.safari
Out[11]: 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-us) AppleWebKit/533.20.25 (KHTML, like Gecko) Version/5.0.4 Safari/533.20.27'

In [12]: ua.random
Out[12]: 'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.2117.157 Safari/537.36'

In [13]: ua.random
Out[13]: 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2225.0 Safari/537.36'

In [14]: ua.random
Out[14]: 'Mozilla/5.0 (X11; OpenBSD i386) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36'

In [15]: ua.random
Out[15]: 'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2224.3 Safari/537.36'

In [17]: ua['google']
Out[17]: 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1500.55 Safari/537.36'

Those are the attributes for fetching User-Agent strings.

The earlier request code can then be rewritten as follows:

import requests
from fake_useragent import UserAgent

ua = UserAgent()
headers = {'User-Agent': ua.random}  # pick a random browser User-Agent
url = ''  # fill in the target URL
resp = requests.get(url, headers=headers)

Source code

class FakeUserAgent(object):
    def __init__(
        self,
        cache=True,
        use_cache_server=True,
        path=settings.DB,
        fallback=None,
        verify_ssl=True,
        safe_attrs=tuple(),
    ):
        assert isinstance(cache, bool), \
            'cache must be True or False'

        self.cache = cache

        assert isinstance(use_cache_server, bool), \
            'use_cache_server must be True or False'

        self.use_cache_server = use_cache_server

        assert isinstance(path, str_types), \
            'path must be string or unicode'

        self.path = path

        if fallback is not None:
            assert isinstance(fallback, str_types), \
                'fallback must be string or unicode'

        self.fallback = fallback

        assert isinstance(verify_ssl, bool), \
            'verify_ssl must be True or False'

        self.verify_ssl = verify_ssl

        assert isinstance(safe_attrs, (list, set, tuple)), \
            'safe_attrs must be list\\tuple\\set of strings or unicode'

        if safe_attrs:
            str_types_safe_attrs = [
                isinstance(attr, str_types) for attr in safe_attrs
            ]

            assert all(str_types_safe_attrs), \
                'safe_attrs must be list\\tuple\\set of strings or unicode'

        self.safe_attrs = set(safe_attrs)

        # initial empty data
        self.data = {}
        # TODO: change source file format
        # version 0.1.4+ migration tool
        self.data_randomize = []
        self.data_browsers = {}

        self.load()

The library has plenty of other attributes and features; reading the source is the easiest way to master them.
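
For example, the constructor parameters shown above can be combined like this (a sketch against the 0.1.x versions quoted here; the fallback string is just an example UA):

from fake_useragent import UserAgent

# Cache the UA data locally and fall back to a fixed string when the
# remote data source is unreachable (parameters from the __init__ above).
ua = UserAgent(
    cache=True,
    use_cache_server=True,
    fallback='Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36',
)
print(ua.random)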

ua_parser

This library parses User-Agent strings and works well. A few examples follow; dig into the details on your own.

In [1]: from fake_useragent import UserAgent

In [2]: from ua_parser import user_agent_parser

In [3]: ua = UserAgent()
In [5]: user_agent_parser.ParseUserAgent(ua.google)
Out[5]: {'family': 'Chrome', 'major': '28', 'minor': '0', 'patch': '1467'}

In [6]: user_agent_parser.ParseOS(ua.google)
Out[6]:
{'family': 'Windows 7',
 'major': None,
 'minor': None,
 'patch': None,
 'patch_minor': None}

In [8]: user_agent_parser.ParseDevice(ua.google)
Out[8]: {'brand': None, 'family': 'Other', 'model': None}

In [9]: dir(user_agent_parser)
Out[9]:
['DEVICE_PARSERS',
 'DeviceParser',
 'GetFilters',
 'MAX_CACHE_SIZE',
 'OSParser',
 'OS_PARSERS',
 'Parse',
 'ParseDevice',
 'ParseOS',
 'ParseUserAgent',
 'ParseWithJSOverrides',
 'Pretty',
 'PrettyOS',
 'PrettyUserAgent',
 'UA_PARSER_YAML',
 'USER_AGENT_PARSERS',
 'UserAgentParser',
 '__author__',
 '__builtins__',
 '__cached__',
 '__doc__',
 '__file__',
 '__loader__',
 '__name__',
 '__package__',
 '__spec__',
 '_parse_cache',
 'absolute_import',
 'os',
 're']
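
Of the names above, Parse is the umbrella call that runs ParseUserAgent, ParseOS and ParseDevice together. A quick sketch (the UA string is arbitrary, and the exact family values depend on the installed rules version):

from ua_parser import user_agent_parser

ua_string = ('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
             '(KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36')
parsed = user_agent_parser.Parse(ua_string)
print(parsed['user_agent']['family'])  # e.g. 'Chrome'
print(parsed['os']['family'])          # e.g. 'Windows 7'
print(parsed['device']['family'])      # e.g. 'Other'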

Countermeasures for common anti-crawler mechanisms

Header checks

The simplest anti-crawler mechanism checks the HTTP request headers, including User-Agent, Referer, and Cookies.

User-Agent

The User-Agent check looks at the kind and version of client the user is running. In Scrapy, this is usually handled in a downloader middleware: put a list of browser User-Agent strings in settings.py, then create a random_user_agent middleware file, as sketched below.
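
A minimal sketch of such a middleware, assuming a custom USER_AGENT_LIST setting defined in settings.py (the setting and class names are illustrative, not Scrapy built-ins):

import random

class RandomUserAgentMiddleware(object):
    """Downloader middleware that picks a random User-Agent per request."""

    def __init__(self, user_agents):
        self.user_agents = user_agents

    @classmethod
    def from_crawler(cls, crawler):
        # USER_AGENT_LIST is the list of UA strings kept in settings.py
        return cls(crawler.settings.getlist('USER_AGENT_LIST'))

    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(self.user_agents)

The middleware then has to be enabled under DOWNLOADER_MIDDLEWARES in settings.py.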

Referer

The Referer header tells the server where a request came from and is commonly used for image hotlink protection. In Scrapy, when a page URL was extracted from a previously crawled page, Scrapy automatically sets that earlier page's URL as the Referer. You can also set the Referer field yourself, in the same middleware fashion as above; a sketch follows.
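
A minimal sketch of setting the field explicitly from a spider callback (the spider and URLs are placeholders):

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com/']

    def parse(self, response):
        # Override the Referer that Scrapy would otherwise fill in itself
        yield scrapy.Request(
            'http://example.com/some-image.jpg',
            headers={'Referer': response.url},
            callback=self.parse_image,
        )

    def parse_image(self, response):
        pass  # handle the response here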

Cookies

A site may count how many times a session_id cookie is used and trigger its anti-crawler policy past a threshold, so you can set COOKIES_ENABLED = False in Scrapy to send requests without cookies.
Some sites force cookies to be enabled, which takes a bit more work: run a separate, simple crawler that periodically sends cookie-less requests to the target site, extracts the Set-Cookie header from each response, and stores it. When crawling pages, attach the stored cookies to the request headers, as sketched below.
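
A minimal sketch of that cookie-harvesting idea using requests (the URLs and function names are hypothetical):

import requests

def harvest_cookies(url):
    """Send a cookie-less request and keep whatever Set-Cookie returned."""
    resp = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
    return resp.cookies  # a RequestsCookieJar built from Set-Cookie headers

def fetch(url, cookies):
    """Reuse the stored cookies on later requests."""
    return requests.get(url, headers={'User-Agent': 'Mozilla/5.0'},
                        cookies=cookies)

cookies = harvest_cookies('http://example.com/')
resp = fetch('http://example.com/some/page', cookies)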

X-Forwarded-For

Adding an X-Forwarded-For field to the request headers declares you to be a transparent proxy server, and some sites go easier on proxies. For example:
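
A minimal sketch (the IP comes from the documentation-only 203.0.113.0/24 range and is purely an example):

import requests

headers = {
    'User-Agent': 'Mozilla/5.0',
    # Claim the request passed through a transparent proxy
    'X-Forwarded-For': '203.0.113.10',
}
resp = requests.get('http://example.com/', headers=headers)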

Per-IP request limits

If a single IP sends requests too quickly, the anti-crawler mechanism fires. You can get around this by slowing the crawl down, at the cost of a much longer run, or by adding proxies, as sketched below.
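
A minimal sketch of rotating through a proxy pool (the proxy addresses are placeholders):

import random
import requests

proxy_pool = [
    {'http': 'http://10.0.0.1:8080', 'https': 'http://10.0.0.1:8080'},
    {'http': 'http://10.0.0.2:8080', 'https': 'http://10.0.0.2:8080'},
]

resp = requests.get(
    'http://example.com/',
    headers={'User-Agent': 'Mozilla/5.0'},
    proxies=random.choice(proxy_pool),  # a different proxy each request
)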

Dynamically loaded content

More and more sites now load content dynamically with Ajax. In that case, intercept the Ajax requests and analyze them first: you may be able to construct the corresponding API URL and fetch the content directly, usually as JSON, which spares you from parsing HTML at all. For example:
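
A minimal sketch of calling a discovered Ajax endpoint directly (the endpoint and parameters are hypothetical, the kind of thing found in the browser's network panel):

import requests

api_url = 'http://example.com/api/items'  # spotted in the browser dev tools
params = {'page': 1, 'size': 20}
resp = requests.get(api_url, params=params,
                    headers={'User-Agent': 'Mozilla/5.0'})
data = resp.json()  # the payload is already structured JSON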