A Detailed Guide to the requests Library for Python Web Scraping
Python web scraping: using the requests library
requests is a simple, easy-to-use HTTP library implemented in Python, and it is much more concise than urllib. requests lets you send HTTP/1.1 requests: just specify a URL, optionally with a query string, and you can start fetching pages.
Because it is a third-party library, install it from the command line first:
pip install requests
After installation, try importing it; if the import succeeds, you are ready to go.
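As a quick sanity check, you can print the installed version (a minimal sketch; the exact version string will vary):

import requests

print(requests.__version__)   # any version string here means the install and import worked

Basic usage: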
requests.get()
Requests the target site; the return value is a requests.models.Response object:

import requests

response = requests.get('http://www.baidu.com')
print(response.status_code)   # print the status code
print(response.url)           # print the request URL
print(response.headers)       # print the response headers
print(response.cookies)       # print cookie information
print(response.text)          # print the page source as text
print(response.content)       # print the body as raw bytes
Taking the status code as an example, running the code above prints 200.
A status code of 200 confirms that the request reached the target site normally. A 403 usually means the target has a firewall or anti-scraping policy and has blocked your IP.
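If you would rather not compare status codes by hand, raise_for_status() converts any 4xx/5xx response into an exception. A minimal sketch:

import requests

response = requests.get('http://www.baidu.com')
response.raise_for_status()   # raises requests.exceptions.HTTPError on 4xx/5xx responses
print('ok:', response.status_code)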
The various request methods:

import requests

requests.get('http://www.baidu.com')
requests.post('http://www.baidu.com')
requests.put('http://www.baidu.com')
requests.delete('http://www.baidu.com')
requests.head('http://www.baidu.com')
requests.options('http://www.baidu.com')
Basic GET request

import requests

response = requests.get('http://www.baidu.com')
print(response.text)
GET request with parameters:
The first way is to put the parameters directly in the URL:

import requests

response = requests.get("https://www.crrcgo.cc/admin/crr_supplier.html?params=1")
print(response.text)
The other way is to put the parameters in a dict first, then pass that dict as the params argument when making the request:

import requests

data = {
    'params': '1',
}
response = requests.get('https://www.crrcgo.cc/admin/crr_supplier.html', params=data)
print(response.text)
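Either way, requests builds the query string for you. One way to confirm this is to inspect response.url after the request; a small sketch reusing the URL above:

import requests

data = {'params': '1'}
response = requests.get('https://www.crrcgo.cc/admin/crr_supplier.html', params=data)
print(response.url)   # ...crr_supplier.html?params=1, built from the dict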
Basic POST request:

import requests

response = requests.post('http://baidu.com')
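A POST usually carries a body. As an illustration, the sketch below sends hypothetical form fields to httpbin.org, a public test service that echoes back what it receives:

import requests

data = {'name': 'test', 'age': '22'}   # hypothetical form fields, for illustration only
response = requests.post('http://httpbin.org/post', data=data)
print(response.text)   # httpbin echoes the submitted fields under the "form" key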
Parsing JSON

import requests

response = requests.get('http://httpbin.org/get')
print(response.text)
print(response.json())         # response.json() is equivalent to json.loads(response.text)
print(type(response.json()))
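Since response.json() returns an ordinary Python dict, its fields can be read with normal indexing. A small sketch using the same httpbin endpoint:

import requests

response = requests.get('http://httpbin.org/get')
data = response.json()           # a plain dict
print(data['url'])               # a top-level field
print(data['headers']['Host'])   # a nested field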
Saving a binary file

import requests

response = requests.get('http://img.ivsky.com/img/tupian/pre/201708/30/kekeersitao-002.jpg')
b = response.content             # the raw bytes of the image
with open('f:/fengjing.jpg', 'wb') as f:
    f.write(b)
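For large files, loading the whole body into memory via response.content can be wasteful. A sketch of the streaming alternative that requests provides (the save path is just an example):

import requests

url = 'http://img.ivsky.com/img/tupian/pre/201708/30/kekeersitao-002.jpg'
with requests.get(url, stream=True) as response:
    with open('f:/fengjing.jpg', 'wb') as f:
        # iter_content reads the body in chunks instead of all at once
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)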
Adding headers to your request

import requests

heads = {}
heads['user-agent'] = 'mozilla/5.0 ' \
                      '(macintosh; u; intel mac os x 10_6_8; en-us) applewebkit/534.50 ' \
                      '(khtml, like gecko) version/5.1 safari/534.50'
response = requests.get('http://www.baidu.com', headers=heads)   # pass the dict via the headers argument
Sending a browser-style User-Agent makes the request look like it comes from a regular browser, which helps it get past simple firewall and anti-scraping checks.
Using a proxy
As with headers, the proxies argument is also a dict. The example below uses requests to scrape the IP, port, and type columns from a proxy-listing site. Since these proxies are free, they tend to stop working very quickly.
import requests
import re

def get_html(url):
    proxy = {
        'http': '120.25.253.234:812',
        'https': '163.125.222.244:8123'
    }
    heads = {}
    heads['user-agent'] = 'mozilla/5.0 (windows nt 10.0; wow64) applewebkit/537.36 (khtml, like gecko) chrome/49.0.2623.221 safari/537.36 se 2.x metasr 1.0'
    req = requests.get(url, headers=heads, proxies=proxy)
    html = req.text
    return html

def get_ipport(html):
    # pull the ip, port and type columns out of the proxy-list page
    regex = r'<td data-title="ip">(.+)</td>'
    iplist = re.findall(regex, html)
    regex2 = r'<td data-title="port">(.+)</td>'
    portlist = re.findall(regex2, html)
    regex3 = r'<td data-title="类型">(.+)</td>'   # "类型" is the type column header on the Chinese site
    typelist = re.findall(regex3, html)
    sumray = []
    # the three lists line up row by row, so zip them instead of nesting loops
    for i, p, t in zip(iplist, portlist, typelist):
        sumray.append(t + ',' + i + ':' + p)
    print('anonymous proxies')
    print(sumray)

if __name__ == '__main__':
    url = 'http://www.baidu.com'   # replace with the proxy-list page you want to scrape
    get_ipport(get_html(url))
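If a proxy requires authentication, requests accepts credentials embedded in the proxy URL. A minimal sketch; the user, password, and addresses below are placeholders for illustration only:

import requests

proxies = {
    'http': 'http://user:password@10.10.1.10:3128',    # hypothetical proxy
    'https': 'http://user:password@10.10.1.10:1080',   # hypothetical proxy
}
response = requests.get('http://www.baidu.com', proxies=proxies)
print(response.status_code)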
Getting cookies

import requests

response = requests.get('http://www.baidu.com')
print(response.cookies)
print(type(response.cookies))
for k, v in response.cookies.items():
    print(k + ':' + v)
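You can also send cookies with a request by passing a plain dict as the cookies argument. A small sketch with a hypothetical cookie, again using httpbin to echo back what was received:

import requests

cookies = {'token': 'abc123'}   # hypothetical cookie name and value
response = requests.get('http://httpbin.org/cookies', cookies=cookies)
print(response.text)   # httpbin reports the cookies it received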
Session persistence
A Session object keeps cookies across requests, so state such as a login carries over from one call to the next:

import requests

session = requests.Session()
session.get('https://www.crrcgo.cc/admin/crr_supplier.html')
response = session.get('https://www.crrcgo.cc/admin/')
print(response.text)
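To see the cookie persistence in action, httpbin offers endpoints that set and then report cookies. A minimal sketch:

import requests

session = requests.Session()
# the first request sets a cookie, which the session stores automatically
session.get('http://httpbin.org/cookies/set/number/12345')
# the second request sends the stored cookie back without any extra work
response = session.get('http://httpbin.org/cookies')
print(response.text)   # {"cookies": {"number": "12345"}}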
Certificate verification settings

import requests
from requests.packages import urllib3

urllib3.disable_warnings()   # silence the InsecureRequestWarning from urllib3
response = requests.get('https://www.12306.cn', verify=False)   # disable certificate verification
print(response.status_code)
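Disabling verification is only advisable for testing. For real use, verify can instead point at a CA bundle or certificate file (the path below is hypothetical):

import requests

response = requests.get('https://www.12306.cn', verify='/path/to/certfile')   # hypothetical path
print(response.status_code)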
Catching timeout exceptions

import requests
from requests.exceptions import ReadTimeout

try:
    res = requests.get('http://httpbin.org', timeout=0.1)
    print(res.status_code)
except ReadTimeout:
    print('timeout')
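timeout also accepts a (connect, read) tuple when you want to limit the two phases separately, which requests supports directly:

import requests

# allow 3.05 seconds to connect and 27 seconds to read the response
res = requests.get('http://httpbin.org', timeout=(3.05, 27))
print(res.status_code)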
Exception handling
Use try…except to catch exceptions:

import requests
from requests.exceptions import ReadTimeout, HTTPError, RequestException

try:
    response = requests.get('http://www.baidu.com', timeout=0.5)
    print(response.status_code)
except ReadTimeout:
    print('timeout')
except HTTPError:
    print('httperror')
except RequestException:
    print('reqerror')
Summary
That's all for this article. I hope it has been helpful, and I hope you'll check back for more content!