
Web Scraping Notes


Python has two especially handy modules for writing crawlers: requests and BeautifulSoup. The requests module handles the HTTP access; BeautifulSoup takes the fetched page and makes it easy to work with its tags and content.

Example: scraping news from Autohome (汽车之家)

Let's warm up with a quick example:

#!/usr/bin/python
# -*- coding:utf-8 -*-
import requests
from bs4 import BeautifulSoup

response = requests.get('http://www.autohome.com.cn/news/')
# print(response.content)  # raw bytes
response.encoding = 'gbk'
html = response.text  # decoded string

# Parse the string into a structured tree
obj = BeautifulSoup(html, 'html.parser')
# Find the first tag matching the criteria
tag = obj.find(name='div', id='auto-channel-lazyload-article')
# A list of tag objects: [tag, tag, tag, ...]
li_list = tag.find_all(name='li')

for item in li_list:
    h3_obj = item.find(name='h3')
    p_obj = item.find(name='p')
    a_obj = item.find(name='a')
    if h3_obj:
        print(h3_obj.text)
        print(p_obj.text)
        print(a_obj.attrs.get('href'))

Example 2: scraping pages that require login

The example sketches the approach for scraping GitHub pages:

# A page that can only be viewed after logging in: https://github.com/settings/repositories
"""
Example 2: log in to GitHub, then visit a personal page
    1. GET https://github.com/login
        to obtain the CSRF token

        requests.get('https://github.com/login')

    2. POST to https://github.com/session with
        commit: Sign in
        utf8: ✓
        authenticity_token: M8MJY6nIExIiqGfMWiz9b2A3hR7PZmo9Slh5g3mB7qZhWxFDo7rawaZT1sdT+/nvr6anQ3Hs+i/hwE0pE4rGMw==
        login: root
        password: sdfgsdfg


        response = requests.post(
            url="https://github.com/session",
            data={
                'commit': 'Sign in',
                'utf8': '✓',
                'authenticity_token': 'M8MJY6nIExIiqGfMWiz9b2A3hR7PZmo9Slh5g3mB7qZhWxFDo7rawaZT1sdT+/nvr6anQ3Hs+i/hwE0pE4rGMw==',
                'login': 'root',
                'password': 'sdfgsdfg'
            }
        )

        response.text
        response.cookies

    3. Request URLs that require being logged in
        requests.get(
            url='sdfasdfasdfasdfasdfasdf',
            cookies={....}
        )
"""

import requests
from bs4 import BeautifulSoup

# Step 1: fetch the login page and extract the CSRF token
r1 = requests.get(url='https://github.com/login')
b1 = BeautifulSoup(r1.text, 'html.parser')
tag = b1.find(name='input', attrs={'name': 'authenticity_token'})
token = tag.get('value')
r1_cookie = r1.cookies.get_dict()

print('First request cookies:', r1_cookie)
# Step 2: POST the username/password along with the token and cookies; the backend authorizes this session
# r2 = requests.post()
# r2 = requests.request('post',)
# print(token)
r2 = requests.post(
    url='https://github.com/session',
    data={
        'commit': 'Sign in',
        'utf8': '',
        'authenticity_token': token,
        'login': 'your-username',
        'password': 'your-password'
    },
    cookies=r1_cookie
)
# To tell whether the login worked:
#     1. inspect r2's status code and the Location response header
#     2. look for an error message in the response body

r2_cookie = r2.cookies.get_dict()
r1_cookie.update(r2_cookie)

print('Second request cookies:', r2_cookie)
# Step 3: visit a page that requires login, carrying the merged cookies
r3 = requests.get(
    url='https://github.com/settings/repositories',
    cookies=r1_cookie
)
print(r3.text)

So, for pages that are only visible after login, the cookies must be sent along:

requests.get(
    url='sdfasdfasdfasdfasdfasdf',
    cookies={....}
)
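
Manually merging cookie dicts works, but requests.Session does this bookkeeping for you: cookies from every response are stored on the session and sent automatically on later requests. A minimal sketch of the same GitHub flow (credentials are placeholders):

import requests
from bs4 import BeautifulSoup

session = requests.Session()

# Cookies from this response are stored on the session automatically
r1 = session.get('https://github.com/login')
token = BeautifulSoup(r1.text, 'html.parser').find(
    name='input', attrs={'name': 'authenticity_token'}).get('value')

# No explicit cookies= argument is needed from here on
session.post('https://github.com/session', data={
    'commit': 'Sign in',
    'authenticity_token': token,
    'login': 'your-username',    # placeholder
    'password': 'your-password'  # placeholder
})
r3 = session.get('https://github.com/settings/repositories')
print(r3.text)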

Request headers and request body

There is a decent introductory article on this topic: HTTP请求行、请求头、请求体详解 (a detailed look at the HTTP request line, headers, and body).

Extension

An HTTP response consists of two parts: response headers and a response body.

Likewise, an HTTP request consists of two parts: request headers and a request body.

The headers and the body are separated by two \r\n in a row, and each header line ends with a single \r\n. Of course, this only shows up when you view the raw text; normally these characters are invisible. HTTP uses \r\n for line breaks.

"GET /mike.html HTTP/1.1\r\n"
        "Accept: image/gif, image/jpeg, image/pjpeg, application/x-ms-application, application/xaml+xml, application/x-ms-xbap, */*\r\n"
        "Accept-Language: zh-Hans-CN,zh-Hans;q=0.8,en-US;q=0.5,en;q=0.3\r\n"
        "User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 10.0; WOW64; Trident/7.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729)\r\n"
        "Accept-Encoding: gzip, deflate\r\n"
        "Host: 192.168.48.98:8000\r\n"
        "Connection: Keep-Alive\r\n"
        "\r\n"
\r\n\r\n
request body....
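
To see this framing concretely, here is a minimal sketch that speaks raw HTTP over a socket using only the standard library (httpbin.org is just a convenient test host; any reachable HTTP server would do):

import socket

# Build a bare HTTP/1.1 request by hand: each header line ends with \r\n,
# and a blank line (\r\n\r\n) separates the headers from the body
request = (
    "GET /get HTTP/1.1\r\n"
    "Host: httpbin.org\r\n"
    "Connection: close\r\n"
    "\r\n"
)

s = socket.create_connection(('httpbin.org', 80))
s.sendall(request.encode('ascii'))

# Read the raw response: status line, headers, blank line, then the body
response = b''
while True:
    chunk = s.recv(4096)
    if not chunk:
        break
    response += chunk
s.close()
print(response.decode('utf-8', errors='replace'))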

Content-Type

MediaType, i.e. the Internet Media Type, also called the MIME type. In HTTP message headers, Content-Type declares the media type of the message body.

Format: type/subtype(;parameter)?
    type: the main type, an arbitrary string such as text; * means all types.
    subtype: the subtype, an arbitrary string such as html; * means all subtypes.
    parameter: optional parameters, such as the q parameter of the Accept header or the charset parameter of Content-Type.

For example: Content-Type: text/html; charset=utf-8

Common media types:

    text/html: HTML
    text/plain: plain text
    text/xml: XML
    image/gif: GIF image
    image/jpeg: JPEG image
    image/png: PNG image

Media types starting with application:

    application/xhtml+xml: XHTML
    application/xml: XML data
    application/atom+xml: Atom XML feeds
    application/json: JSON data
    application/pdf: PDF
    application/msword: Word documents
    application/octet-stream: binary stream data (e.g. file downloads)
    application/x-www-form-urlencoded: the default encType of <form>; form data is encoded as key/value pairs (the default format for form submissions)

Another common media type is used when uploading files:

    multipart/form-data: required when a form needs to upload files

These are the content-type formats you will run into most often in day-to-day development.
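
With requests you rarely set these by hand; the keyword you pick for the body determines the Content-Type. A quick sketch against httpbin.org, which echoes back what it receives:

import requests

# data= with a dict -> application/x-www-form-urlencoded
r = requests.post('http://httpbin.org/post', data={'k1': 'v1'})
print(r.request.headers['Content-Type'])  # application/x-www-form-urlencoded

# json= -> application/json (the dict is serialized for you)
r = requests.post('http://httpbin.org/post', json={'k1': 'v1'})
print(r.request.headers['Content-Type'])  # application/json

# files= -> multipart/form-data with a generated boundary
r = requests.post('http://httpbin.org/post', files={'f1': ('a.txt', 'hello')})
print(r.request.headers['Content-Type'])  # multipart/form-data; boundary=...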

Location and redirects

If an HTTP response is a redirect, the Location header names the target of the redirection.
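
requests follows redirects by default, so to look at the Location header yourself you have to turn that off. A small sketch (http://github.com redirects to https, which makes a convenient test):

import requests

ret = requests.get('http://github.com/', allow_redirects=False)
print(ret.status_code)              # e.g. 301
print(ret.headers.get('Location'))  # https://github.com/

# With redirects enabled, the intermediate hops end up in response.history
ret = requests.get('http://github.com/')
print([r.status_code for r in ret.history], ret.url)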

The requests module

The Python standard library offers urllib, urllib2, httplib and the like for making HTTP requests, but they are showing their age, hence the requests module: a high-level wrapper over the built-in modules that lets a Pythonista carry out pretty much any operation a browser can when making network requests.

GET requests

# 1. Without parameters

import requests

ret = requests.get('https://github.com/timeline.json')

print(ret.url)
print(ret.text)

# 2. With parameters

import requests

payload = {'key1': 'value1', 'key2': 'value2'}
ret = requests.get("http://httpbin.org/get", params=payload)

print(ret.url)
print(ret.text)
import requests
#
# requests.get(
#     url='http://www.baidu.com',
#     params={'k1':123,'k3': 456},
#     cookies={'c1':'123','c2':123},
#     headers={
#         'Referer':'https://github.com/',
#         'User-Agent':"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
#     }
# )

POST requests

# 1. Basic POST

import requests

payload = {'key1': 'value1', 'key2': 'value2'}
ret = requests.post("http://httpbin.org/post", data=payload)

print(ret.text)

# 2. Sending headers along with the body

import requests
import json

url = 'https://api.github.com/some/endpoint'
payload = {'some': 'data'}
headers = {'content-type': 'application/json'}

ret = requests.post(url, data=json.dumps(payload), headers=headers)

print(ret.text)
print(ret.cookies)
#
# requests.post(
#     data={'k1': 'v1', 'k2': 'v2'}
# )
# Request headers:
#     Content-Type: application/x-www-form-urlencoded
# Request body:
#     k1=v1&k2=v2

# On the Django side:
# def index(request):
#
#     request.body  # k1=v1&k2=v2
#     # Django reads Content-Type: application/x-www-form-urlencoded from the headers
#     # and parses the body into request.POST  # {....}

Other request methods

requests.get(url, params=None, **kwargs)
requests.post(url, data=None, json=None, **kwargs)
requests.put(url, data=None, **kwargs)
requests.head(url, **kwargs)
requests.delete(url, **kwargs)
requests.patch(url, data=None, **kwargs)
requests.options(url, **kwargs)
  
# All of the methods above are built on top of this one
requests.request(method, url, **kwargs)
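
For example, these two calls are interchangeable (httpbin.org simply echoes the request back):

import requests

r1 = requests.get('http://httpbin.org/get', params={'k': 'v'})
r2 = requests.request('GET', 'http://httpbin.org/get', params={'k': 'v'})
print(r1.url == r2.url)  # True: both request http://httpbin.org/get?k=v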

Parameter reference

def request(method, url, **kwargs):
    """Constructs and sends a :class:`Request <Request>`.

    :param method: method for the new :class:`Request` object.
    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
    :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
    :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
    :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload.
        ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')``
        or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
        defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers
        to add for the file.
    :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
    :param timeout: (optional) How long to wait for the server to send data
        before giving up, as a float, or a :ref:`(connect timeout, read
        timeout) <timeouts>` tuple.
    :type timeout: float or tuple
    :param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed.
    :type allow_redirects: bool
    :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
    :param verify: (optional) whether the SSL cert will be verified. A CA_BUNDLE path can also be provided. Defaults to ``True``.
    :param stream: (optional) if ``False``, the response content will be immediately downloaded.
    :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response

    Usage::

      >>> import requests
      >>> req = requests.request('GET', 'http://httpbin.org/get')
      <Response [200]>
    """

Usage examples

def param_method_url():
    # requests.request(method='get', url='http://127.0.0.1:8000/test/')
    # requests.request(method='post', url='http://127.0.0.1:8000/test/')
    pass


def param_param():
    # params can be:
    # - a dict
    # - a string
    # - bytes (ASCII characters only)

    # requests.request(method='get',
    # url='http://127.0.0.1:8000/test/',
    # params={'k1': 'v1', 'k2': '水电费'})

    # requests.request(method='get',
    # url='http://127.0.0.1:8000/test/',
    # params="k1=v1&k2=水电费&k3=v3&k3=vv3")

    # requests.request(method='get',
    # url='http://127.0.0.1:8000/test/',
    # params=bytes("k1=v1&k2=k2&k3=v3&k3=vv3", encoding='utf8'))

    # Error: bytes containing non-ASCII characters are rejected
    # requests.request(method='get',
    # url='http://127.0.0.1:8000/test/',
    # params=bytes("k1=v1&k2=水电费&k3=v3&k3=vv3", encoding='utf8'))
    pass


def param_data():
    # data can be:
    # - a dict
    # - a string
    # - bytes
    # - a file object

    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # data={'k1': 'v1', 'k2': '水电费'})

    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # data="k1=v1; k2=v2; k3=v3; k3=v4"
    # )

    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # data="k1=v1;k2=v2;k3=v3;k3=v4",
    # headers={'Content-Type': 'application/x-www-form-urlencoded'}
    # )

    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # data=open('data_file.py', mode='r', encoding='utf-8'),  # the file contains: k1=v1;k2=v2;k3=v3;k3=v4
    # headers={'Content-Type': 'application/x-www-form-urlencoded'}
    # )
    pass


def param_json():
    # The json argument is serialized with json.dumps(...) into a string and
    # sent as the request body, with Content-Type set to application/json
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     json={'k1': 'v1', 'k2': '水电费'})


def param_headers():
    # Send request headers to the server
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     json={'k1': 'v1', 'k2': '水电费'},
                     headers={'Content-Type': 'application/x-www-form-urlencoded'}
                     )


def param_cookies():
    # Send cookies to the server
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     data={'k1': 'v1', 'k2': 'v2'},
                     cookies={'cook1': 'value1'},
                     )
    # A CookieJar also works (the dict form is a wrapper around it)
    from http.cookiejar import CookieJar
    from http.cookiejar import Cookie

    obj = CookieJar()
    obj.set_cookie(Cookie(version=0, name='c1', value='v1', port=None, domain='', path='/', secure=False, expires=None,
                          discard=True, comment=None, comment_url=None, rest={'HttpOnly': None}, rfc2109=False,
                          port_specified=False, domain_specified=False, domain_initial_dot=False, path_specified=False)
                   )
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     data={'k1': 'v1', 'k2': 'v2'},
                     cookies=obj)


def param_files():
    # Upload a file
    # file_dict = {
    # 'f1': open('readme', 'rb')
    # }
    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # files=file_dict)

    # Upload a file under a custom filename
    # file_dict = {
    # 'f1': ('test.txt', open('readme', 'rb'))
    # }
    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # files=file_dict)

    # Upload in-memory content under a custom filename
    # file_dict = {
    # 'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf")
    # }
    # requests.request(method='POST',
    # url='http://127.0.0.1:8000/test/',
    # files=file_dict)

    # Upload with an explicit content type and extra per-file headers
    # file_dict = {
    #     'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf", 'application/text', {'k1': '0'})
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)

    pass


# Some pages pop up a login dialog before anything can be viewed (think router
# admin pages or the ZooKeeper web UI); these use HTTP auth, which is what the
# auth parameter is for
def param_auth():
    from requests.auth import HTTPBasicAuth, HTTPDigestAuth

    # Basic auth sends base64(username:password) in the Authorization request header
    ret = requests.get('https://api.github.com/user', auth=HTTPBasicAuth('usernamesss', 'passwdaxxxx'))
    print(ret.text)

    # ret = requests.get('http://192.168.1.1',
    # auth=HTTPBasicAuth('admin', 'admin'))
    # ret.encoding = 'gbk'
    # print(ret.text)

    # ret = requests.get('http://httpbin.org/digest-auth/auth/user/pass', auth=HTTPDigestAuth('user', 'pass'))
    # print(ret)


def param_timeout():
    # timeout covers two phases, connecting and reading; a single float applies
    # to both, so here each phase times out after 1 second
    # ret = requests.get('http://google.com/', timeout=1)
    # print(ret)

    # A tuple sets them separately: 5 seconds to connect, 1 second to read
    # ret = requests.get('http://google.com/', timeout=(5, 1))
    # print(ret)
    pass
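
When a timeout fires, requests raises an exception instead of returning a response; a minimal sketch of handling it:

import requests

try:
    ret = requests.get('http://google.com/', timeout=(5, 1))
    print(ret.status_code)
except requests.exceptions.Timeout:
    print('request timed out')
except requests.exceptions.ConnectionError as e:
    print('connection failed:', e)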


# Redirect handling can be disabled with the allow_redirects parameter, which
# lets you inspect the redirect response itself
def param_allow_redirects():
    ret = requests.get('http://127.0.0.1:8000/test/', allow_redirects=False)
    print(ret.text)


# Going through a proxy server
def param_proxies():
    # proxies = {
    # "http": "61.172.249.96:80",
    # "https": "http://61.185.219.126:3128",
    # }

    # proxies = {'http://10.20.1.128': 'http://10.10.1.10:5323'}

    # ret = requests.get("http://www.proxy360.cn/Proxy", proxies=proxies)
    # print(ret.headers)


    # from requests.auth import HTTPProxyAuth
    #
    # proxyDict = {
    # 'http': '77.75.105.165',
    # 'https': '77.75.105.165'
    # }
    # auth = HTTPProxyAuth('username', 'mypassword')
    #
    # r = requests.get("http://www.google.com", proxies=proxyDict, auth=auth)
    # print(r.text)

    pass


# If the file being downloaded is large, enable streaming to reduce memory pressure
def param_stream():
    ret = requests.get('http://127.0.0.1:8000/test/', stream=True)
    print(ret.content)
    ret.close()

    # from contextlib import closing
    # with closing(requests.get('http://httpbin.org/get', stream=True)) as r:
    #     # Process the response here, piece by piece
    #     for i in r.iter_content():
    #         print(i)
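
A common streaming pattern is saving a large file to disk chunk by chunk, so the whole body never sits in memory at once; a sketch (URL and filename are placeholders):

import requests

url = 'http://127.0.0.1:8000/bigfile'  # placeholder URL
with requests.get(url, stream=True) as r:
    r.raise_for_status()
    with open('bigfile.bin', 'wb') as f:
        # iter_content yields the body piece by piece instead of all at once
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)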


# Using a Session
def requests_session():
    import requests

    session = requests.Session()

    ### 1. First visit any page to pick up the initial cookies

    i1 = session.get(url="http://dig.chouti.com/help/service")

    ### 2. Log in, carrying the previous cookies; the backend authorizes the gpsd value in the cookie
    i2 = session.post(
        url="http://dig.chouti.com/login",
        data={
            'phone': "8615131255089",
            'password': "xxxxxx",
            'oneMonth': ""
        }
    )

    i3 = session.post(
        url="http://dig.chouti.com/link/vote?linksId=8589623",
    )
    print(i3.text)

Summary of the most common calls

response = requests.get(url='xx',cookies={...})
response.text
# Serialize the cookies into a plain dict
response.cookies.get_dict()
    
response = requests.post(url='xx',cookies={...},data={})
response.text
response.cookies.get_dict()

BeautifulSoup

BeautifulSoup takes an HTML or XML string and parses it into a tree; you can then use its methods to locate specific elements quickly, which makes searching HTML or XML straightforward and spares the Pythonista from hand-writing regular expressions.

Installation:

pip3 install beautifulsoup4

A first example:

from bs4 import BeautifulSoup
 
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
asdf
    <div class="title">
        <b>The Dormouse's story总共</b>
        <h1>f</h1>
    </div>
<div class="story">Once upon a time there were three little sisters; and their names were
    <a  class="sister0" id="link1">Els<span>f</span>ie</a>,
    <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
    <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</div>
ad<br/>sf
<p class="story">...</p>
</body>
</html>
"""
 
soup = BeautifulSoup(html_doc, features="lxml")
# Find the first a tag
tag1 = soup.find(name='a')
# Find all a tags
tag2 = soup.find_all(name='a')
# Find the tag with id=link2
tag3 = soup.select('#link2')

A second example:

from bs4 import BeautifulSoup
 
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
    ...
</body>
</html>
"""
 
soup = BeautifulSoup(html_doc, features="lxml")

The features parameter

The parsers available to the tree builders are:

  1. lxml
  2. html5lib
  3. html.parser

Using lxml or html5lib requires installing an extra library.

When a BeautifulSoup object is created, a TreeBuilder class is used to build the HTML/XML tree. The default is an HTML TreeBuilder, which uses an HTML parser. If you want the input parsed as XML instead, you must say so explicitly via the features argument of the BeautifulSoup constructor. Choosing the features value lets you pick the TreeBuilder best suited to your needs.
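
The choice matters because parsers repair markup differently. A small sketch comparing HTML and XML parsing of the same fragment (features="xml" requires lxml to be installed):

from bs4 import BeautifulSoup

fragment = "<a><b /></a>"

# The HTML tree builder treats <b /> as an ordinary (unclosed) b tag
print(BeautifulSoup(fragment, features="html.parser"))
# <a><b></b></a>

# The XML tree builder keeps it as a self-closing element
print(BeautifulSoup(fragment, features="xml"))
# <?xml version="1.0" encoding="utf-8"?>
# <a><b/></a>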

Attribute and method reference

name, the tag's name

# tag = soup.find('a')
# name = tag.name  # get
# print(name)
# tag.name = 'span'  # set
# print(soup)

attrs, the tag's attributes

# tag = soup.find('a')
# attrs = tag.attrs  # get
# print(attrs)
# tag.attrs = {'ik': 123}  # set: replaces all attributes
# tag.attrs['id'] = 'iiiii'  # set a single attribute
# print(soup)

Note that assigning to tag.attrs resets every attribute: if the tag had several, they are all removed and replaced by exactly what you assign. Setting a single key, as in the second form, leaves the others intact.
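
A quick demonstration of the difference:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<a href="http://example.com" class="sister" id="link1">Elsie</a>',
                     'html.parser')
tag = soup.find('a')

tag.attrs['id'] = 'new-id'  # only id changes; href and class survive
print(tag.attrs)            # {'href': 'http://example.com', 'class': ['sister'], 'id': 'new-id'}

tag.attrs = {'ik': 123}     # wholesale replacement: href, class and id are gone
print(tag.attrs)            # {'ik': 123}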

children, the direct child nodes

Only the first level:

# body = soup.find('body')
# v = body.children

descendants, all descendant nodes, recursively

# body = soup.find('body')
# v = body.descendants
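
To make the difference concrete:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<body><div><span>hi</span></div><p>bye</p></body>', 'html.parser')
body = soup.find('body')

# children: only the direct children of body
print([str(c) for c in body.children])
# ['<div><span>hi</span></div>', '<p>bye</p>']

# descendants: every node in the subtree, including text nodes
print([str(d) for d in body.descendants])
# ['<div><span>hi</span></div>', '<span>hi</span>', 'hi', '<p>bye</p>', 'bye']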

clear, remove everything inside the tag (the tag itself is kept)

# tag = soup.find('body')
# tag.clear()
# print(soup)

decompose, recursively destroy the tag and all of its contents

# body = soup.find('body')
# body.decompose()
# print(soup)

extract, recursively remove the tag and return it, similar to a list's pop

# body = soup.find('body')
# v = body.extract()
# print(soup)

decode, serialize to a string, including the current tag; decode_contents excludes the current tag

# body = soup.find('body')
# v = body.decode()
# v = body.decode_contents()
# print(v)

encode, serialize to bytes, including the current tag; encode_contents excludes the current tag

# body = soup.find('body')
# v = body.encode()
# v = body.encode_contents()
# print(v)

find, get the first matching tag

# tag = soup.find('a')
# print(tag)
# tag = soup.find(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# tag = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tag)

find_all, get all matching tags as a list

# tags = soup.find_all('a')
# print(tags)
 
# tags = soup.find_all('a',limit=1)
# print(tags)
 
# tags = soup.find_all(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# # tags = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tags)
 
 
# ####### lists #######
# v = soup.find_all(name=['a','div'])
# print(v)
 
# v = soup.find_all(class_=['sister0', 'sister'])
# print(v)
 
# v = soup.find_all(text=['Tillie'])
# print(v, type(v[0]))
 
 
# v = soup.find_all(id=['link1','link2'])
# print(v)
 
# v = soup.find_all(href=['link1','link2'])
# print(v)
 
# ####### regular expressions #######
import re
# rep = re.compile('p')
# rep = re.compile('^p')
# v = soup.find_all(name=rep)
# print(v)
 
# rep = re.compile('sister.*')
# v = soup.find_all(class_=rep)
# print(v)
 
# rep = re.compile('http://www.oldboy.com/static/.*')
# v = soup.find_all(href=rep)
# print(v)
 
# ####### filtering with a function #######
# def func(tag):
#     return tag.has_attr('class') and tag.has_attr('id')
# v = soup.find_all(name=func)
# print(v)
 
 
# ## get, read a single tag attribute
# tag = soup.find('a')
# v = tag.get('id')
# print(v)

has_attr, check whether the tag has a given attribute

# tag = soup.find('a')
# v = tag.has_attr('id')
# print(v)

get_text, get the text inside a tag

# tag = soup.find('a')
# v = tag.get_text()
# print(v)
# The text attribute works as well:
# v = tag.text

index, get a tag's index position within another tag

# tag = soup.find('body')
# v = tag.index(tag.find('div'))
# print(v)

# tag = soup.find('body')
# for i, v in enumerate(tag):
#     print(i, v)

is_empty_element, whether the tag is an empty (void) or self-closing element

That is, one of the following self-closing tags: 'br', 'hr', 'input', 'img', 'meta', 'spacer', 'link', 'frame', 'base'

# tag = soup.find('br')
# v = tag.is_empty_element
# print(v)

Related tags of the current tag

# soup.next
# soup.next_element
# soup.next_elements
# soup.next_sibling
# soup.next_siblings
 
#
# tag.previous
# tag.previous_element
# tag.previous_elements
# tag.previous_sibling
# tag.previous_siblings
 
#
# tag.parent
# tag.parents

Searching for a tag's related tags

# tag.find_next(...)
# tag.find_all_next(...)
# tag.find_next_sibling(...)
# tag.find_next_siblings(...)
 
# tag.find_previous(...)
# tag.find_all_previous(...)
# tag.find_previous_sibling(...)
# tag.find_previous_siblings(...)
 
# tag.find_parent(...)
# tag.find_parents(...)
 
# same parameters as find_all

select, select_one: CSS selectors

soup.select("title")
 
soup.select("p nth-of-type(3)")
 
soup.select("body a")
 
soup.select("html head title")
 
tag = soup.select("span,a")
 
soup.select("head > title")
 
soup.select("p > a")
 
soup.select("p > a:nth-of-type(2)")
 
soup.select("p > #link1")
 
soup.select("body > a")
 
soup.select("#link1 ~ .sister")
 
soup.select("#link1 + .sister")
 
soup.select(".sister")
 
soup.select("[class~=sister]")
 
soup.select("#link1")
 
soup.select("a#link2")
 
soup.select('a[href]')
 
soup.select('a[href="http://example.com/elsie"]')
 
soup.select('a[href^="http://example.com/"]')
 
soup.select('a[href$="tillie"]')
 
soup.select('a[href*=".com/el"]')
 
 
from bs4.element import Tag
 
def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child
 
tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator)
print(type(tags), tags)
 
from bs4.element import Tag
def default_candidate_generator(tag):
    for child in tag.descendants:
        if not isinstance(child, Tag):
            continue
        if not child.has_attr('href'):
            continue
        yield child
 
tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator, limit=1)
print(type(tags), tags)

Tag content

# tag = soup.find('span')
# print(tag.string)          # get
# tag.string = 'new content' # set
# print(soup)

# tag = soup.find('body')
# print(tag.string)
# tag.string = 'xxx'
# print(soup)

# tag = soup.find('body')
# v = tag.stripped_strings  # a generator over the text of all inner tags, recursively
# print(list(v))

append, append a tag inside the current tag

# tag = soup.find('body')
# tag.append(soup.find('a'))
# print(soup)
#
# from bs4.element import Tag
# obj = Tag(name='i',attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('body')
# tag.append(obj)
# print(soup)

insert, insert a tag at a given position inside the current tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('body')
# tag.insert(2, obj)
# print(soup)

insert_after, insert_before: insert after or before the current tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('body')
# # tag.insert_before(obj)
# tag.insert_after(obj)
# print(soup)

replace_with, replace the current tag with the given tag

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('div')
# tag.replace_with(obj)
# print(soup)

Creating relationships between tags

# tag = soup.find('div')
# a = soup.find('a')
# tag.setup(previous_sibling=a)
# print(tag.previous_sibling)

wrap, wrap the current tag in the given tag

# from bs4.element import Tag
# obj1 = Tag(name='div', attrs={'id': 'it'})
# obj1.string = '我是一个新来的'
#
# tag = soup.find('a')
# v = tag.wrap(obj1)
# print(soup)
 
# tag = soup.find('a')
# v = tag.wrap(soup.find('p'))
# print(soup)

unwrap, remove the current tag while keeping what it wrapped

# tag = soup.find('a')
# v = tag.unwrap()
# print(soup)
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
    hello,world!

    <a>百度<div>
        <span>s1</span>
        <div>asdf</div>
    </div></a>
</body>
</html>
"""

soup = BeautifulSoup(html_doc, features="html.parser")

# body_obj = soup.find(name='body')
from bs4.element import Tag,NavigableString
# for item in body_obj.children:
#     print('-->',item,type(item))

# div = soup.find(name='div')
# a = soup.find('a')
# div.wrap(a)
# print(soup)

a = soup.find('a')
a.unwrap()
print(soup)

Auto-login examples

博客园 (cnblogs)

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import re
import json
import base64

import rsa
import requests


def js_encrypt(text):
    b64der = 'MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCp0wHYbg/NOPO3nzMD3dndwS0MccuMeXCHgVlGOoYyFwLdS24Im2e7YyhB0wrUsyYf0/nhzCzBK8ZC9eCWqd0aHbdgOQT6CuFQBMjbyGYvlVYU2ZP7kG9Ft6YV6oc9ambuO7nPZh+bvXH0zDKfi02prknrScAKC0XhadTHT3Al0QIDAQAB'
    der = base64.standard_b64decode(b64der)

    pk = rsa.PublicKey.load_pkcs1_openssl_der(der)
    v1 = rsa.encrypt(bytes(text, 'utf8'), pk)
    value = base64.encodebytes(v1).replace(b'\n', b'')
    value = value.decode('utf8')

    return value


session = requests.Session()

i1 = session.get('https://passport.cnblogs.com/user/signin')
rep = re.compile("'VerificationToken': '(.*)'")
v = re.search(rep, i1.text)
verification_token = v.group(1)

form_data = {
    'input1': js_encrypt('wptawy'),
    'input2': js_encrypt('asdfasdf'),
    'remember': False
}

i2 = session.post(url='https://passport.cnblogs.com/user/signin',
                  data=json.dumps(form_data),
                  headers={
                      'Content-Type': 'application/json; charset=UTF-8',
                      'X-Requested-With': 'XMLHttpRequest',
                      'VerificationToken': verification_token}
                  )

i3 = session.get(url='https://i.cnblogs.com/EditDiary.aspx')

print(i3.text)

知乎 (Zhihu)

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import time

import requests
from bs4 import BeautifulSoup

session = requests.Session()

i1 = session.get(
    url='https://www.zhihu.com/#signin',
    headers={
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
    }
)

soup1 = BeautifulSoup(i1.text, 'lxml')
xsrf_tag = soup1.find(name='input', attrs={'name': '_xsrf'})
xsrf = xsrf_tag.get('value')

current_time = time.time()
i2 = session.get(
    url='https://www.zhihu.com/captcha.gif',
    params={'r': current_time, 'type': 'login'},
    headers={
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
    })

with open('zhihu.gif', 'wb') as f:
    f.write(i2.content)

captcha = input('Open the zhihu.gif file and type the captcha shown: ')
form_data = {
    "_xsrf": xsrf,
    'password': 'xxooxxoo',
    "captcha": captcha,
    'email': '[email protected]'
}
i3 = session.post(
    url='https://www.zhihu.com/login/email',
    data=form_data,
    headers={
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
    }
)

i4 = session.get(
    url='https://www.zhihu.com/settings/profile',
    headers={
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36',
    }
)

soup4 = BeautifulSoup(i4.text, 'lxml')
tag = soup4.find(id='rename-section')
nick_name = tag.find('span',class_='name').string
print(nick_name)

GitHub

#!/usr/bin/env python
# -*- coding:utf-8 -*-

import requests
from bs4 import BeautifulSoup

# ############## Method 1 ##############
#
# # 1. Visit the login page and grab the authenticity_token
# i1 = requests.get('https://github.com/login')
# soup1 = BeautifulSoup(i1.text, features='lxml')
# tag = soup1.find(name='input', attrs={'name': 'authenticity_token'})
# authenticity_token = tag.get('value')
# c1 = i1.cookies.get_dict()
# i1.close()
#
# # 2. Send the authenticity_token along with the username/password to authenticate
# form_data = {
# "authenticity_token": authenticity_token,
#     "utf8": "",
#     "commit": "Sign in",
#     "login": "[email protected]",
#     'password': 'xxoo'
# }
#
# i2 = requests.post('https://github.com/session', data=form_data, cookies=c1)
# c2 = i2.cookies.get_dict()
# c1.update(c2)
# i3 = requests.get('https://github.com/settings/repositories', cookies=c1)
#
# soup3 = BeautifulSoup(i3.text, features='lxml')
# list_group = soup3.find(name='div', class_='listgroup')
#
# from bs4.element import Tag
#
# for child in list_group.children:
#     if isinstance(child, Tag):
#         project_tag = child.find(name='a', class_='mr-1')
#         size_tag = child.find(name='small')
#         temp = "Project: %s (%s); path: %s" % (project_tag.string, size_tag.string, project_tag.get('href'), )
#         print(temp)



# ############## Method 2 ##############
# session = requests.Session()
# # 1. Visit the login page and grab the authenticity_token
# i1 = session.get('https://github.com/login')
# soup1 = BeautifulSoup(i1.text, features='lxml')
# tag = soup1.find(name='input', attrs={'name': 'authenticity_token'})
# authenticity_token = tag.get('value')
# c1 = i1.cookies.get_dict()
# i1.close()
#
# # 2. Send the authenticity_token along with the username/password to authenticate
# form_data = {
#     "authenticity_token": authenticity_token,
#     "utf8": "",
#     "commit": "Sign in",
#     "login": "[email protected]",
#     'password': 'xxoo'
# }
#
# i2 = session.post('https://github.com/session', data=form_data)
# c2 = i2.cookies.get_dict()
# c1.update(c2)
# i3 = session.get('https://github.com/settings/repositories')
#
# soup3 = BeautifulSoup(i3.text, features='lxml')
# list_group = soup3.find(name='div', class_='listgroup')
#
# from bs4.element import Tag
#
# for child in list_group.children:
#     if isinstance(child, Tag):
#         project_tag = child.find(name='a', class_='mr-1')
#         size_tag = child.find(name='small')
#         temp = "Project: %s (%s); path: %s" % (project_tag.string, size_tag.string, project_tag.get('href'), )
#         print(temp)

抽屉 (Chouti)

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import requests


# ############## Method 1 ##############
"""
# ## 1. First visit any page to pick up the initial cookies
i1 = requests.get(url="http://dig.chouti.com/help/service")
i1_cookies = i1.cookies.get_dict()

# ## 2. Log in, carrying the previous cookies; the backend authorizes the gpsd value in the cookie
i2 = requests.post(
    url="http://dig.chouti.com/login",
    data={
        'phone': "8615131255089",
        'password': "xxooxxoo",
        'oneMonth': ""
    },
    cookies=i1_cookies
)

# ## 3. Upvote (carrying only the authorized gpsd cookie is enough)
gpsd = i1_cookies['gpsd']
i3 = requests.post(
    url="http://dig.chouti.com/link/vote?linksId=8589523",
    cookies={'gpsd': gpsd}
)

print(i3.text)
"""


# ############## Method 2 ##############
"""
import requests

session = requests.Session()
i1 = session.get(url="http://dig.chouti.com/help/service")
i2 = session.post(
    url="http://dig.chouti.com/login",
    data={
        'phone': "8615131255089",
        'password': "xxooxxoo",
        'oneMonth': ""
    }
)
i3 = session.post(
    url="http://dig.chouti.com/link/vote?linksId=8589523"
)
print(i3.text)

"""

Reposted from: https://www.cnblogs.com/ccorz/p/pa-chong-xiang-guan.html