Python web scraping: the requests library
程序员文章站
2022-07-14 11:19:14
Introduction to requests
- We have already covered Python's built-in urllib module for accessing network resources. However, it is fairly cumbersome to use and lacks many practical high-level features. A better option is requests, a third-party Python library that makes working with URL resources especially convenient.
Installing requests
- If you have Anaconda installed, requests is already available. Otherwise, install it from the command line with pip:
$ pip install requests
requests_get
- Accessing a page with a GET request
import requests

# GET request with query parameters
url = 'https://www.baidu.com/s?'
data = {
    'wd': '中国'
}
header = {'User-Agent': 'Mozilla/5.0 (X11; U; Linux x86_64;'
          ' zh-CN; rv:1.9.2.10) Gecko/20100922'
          ' Ubuntu/10.10 (maverick) Firefox/3.6.10'
}
r = requests.get(url, headers=header, params=data)
# print(r.text)         # response body decoded to text
# print(r.status_code)  # HTTP status code, e.g. 200
# print(r.headers)      # response headers
# print(r.url)          # final URL after parameter encoding
with open('Requests_file/zhongguo.html', 'wb') as fp:
    fp.write(r.content)  # raw bytes of the response body
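How requests turns the params dict into the query string can be inspected without sending anything, by preparing the request by hand (same Baidu search endpoint as above; no network access needed):

```python
import requests

# build the same GET request as above, but only prepare it instead of sending it
req = requests.Request('GET', 'https://www.baidu.com/s',
                       params={'wd': '中国'})
prepared = req.prepare()
# requests percent-encodes non-ASCII parameter values automatically
print(prepared.url)  # → https://www.baidu.com/s?wd=%E4%B8%AD%E5%9B%BD
```

This is also a convenient way to debug exactly what a crawler will send before pointing it at a live site.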
requests_cookie
import requests

# create a session that persists cookies across requests
s = requests.Session()
post_url = 'http://www.renren.com/ajaxLogin/login?1=1&uniqueTimestamp=2019341636849'
header = {'User-Agent': 'Mozilla/5.0 (X11; U; Linux x86_64;'
          ' zh-CN; rv:1.9.2.10) Gecko/20100922'
          ' Ubuntu/10.10 (maverick) Firefox/3.6.10'
}
formdata = {
    'email': '17320015926',
    'password': '123456',
    'icode': '',
    'origURL': 'http://www.renren.com/home',
    'domain': 'renren.com',
    'key_id': '1',
    'captcha_type': 'web_login',
    'f': 'https%3A%2F%2Fwww.baidu.com%2Flink%3Furl%3D_4eOtFSXfVrfNtOlNBgoyTjnVMk2CRdO44Rf-7VG4AG%26wd%3D%'
         '26eqid%3D8b5865030001e71f000000035caefb80',
}
# log in; the session stores the cookies returned by the server
r = s.post(url=post_url, headers=header, data=formdata)
# print(r.text)
# the stored login cookies are sent automatically, so the profile page is accessible
get_url = 'http://www.renren.com/969564068/profile'
r = s.get(url=get_url, headers=header)
print(r.text)
with open('renrenzhuyie.html', 'wb') as fp:
    fp.write(r.content)
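The key point is that a Session stores any cookies the server sets and replays them on later requests. A minimal offline sketch of where those cookies live (the cookie name and value below are made up; normally the server's Set-Cookie header fills the jar):

```python
import requests

s = requests.Session()
# a server's Set-Cookie response header would populate s.cookies automatically;
# here we insert a made-up cookie by hand to show where it is kept
s.cookies.set('sessionid', 'abc123', domain='example.com')
# every subsequent s.get()/s.post() to that domain sends this cookie
print(s.cookies.get('sessionid'))  # → abc123
```

This is why the profile request above succeeds: the login cookies never have to be handled manually.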
requests_proxy
import requests

url = 'https://www.baidu.com/s?ie=UTF-8&wd=ip'
# map the URL scheme to a proxy address
proxies = {
    'https': '203.42.227.113:8080'
}
header = {'User-Agent': 'Mozilla/5.0 (X11; U; Linux x86_64;'
          ' zh-CN; rv:1.9.2.10) Gecko/20100922'
          ' Ubuntu/10.10 (maverick) Firefox/3.6.10'
}
r = requests.get(url=url, headers=header, proxies=proxies)
with open('daili.html', 'wb') as fp:
    fp.write(r.content)
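In practice a proxy entry is usually given for both schemes, and it helps to fail fast when the proxy is down. A hedged sketch (the proxy address is the same placeholder as above and almost certainly dead, hence the error handling):

```python
import requests

# hypothetical proxy address; replace with a live proxy before use
proxies = {
    'http': 'http://203.42.227.113:8080',
    'https': 'http://203.42.227.113:8080',
}
try:
    # a timeout prevents hanging indefinitely on an unreachable proxy
    r = requests.get('https://www.baidu.com/s?ie=UTF-8&wd=ip',
                     proxies=proxies, timeout=5)
    print(r.status_code)
except requests.exceptions.RequestException as e:
    print('proxy request failed:', e)
```

Catching requests.exceptions.RequestException covers connection errors, timeouts, and proxy failures in one place, which matters when rotating through unreliable free proxies.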
requests_post
import requests

url = 'http://cn.bing.com/ttranslationlookup?&IG=D6F5982DA96A4F8E98B007A143DEEEF6&IID=translator.5038.3'
# form data sent as the POST body
formdata = {
    'from': 'en',
    'to': 'zh-CHS',
    'text': 'pig',
}
header = {'User-Agent': 'Mozilla/5.0 (X11; U; Linux x86_64;'
          ' zh-CN; rv:1.9.2.10) Gecko/20100922'
          ' Ubuntu/10.10 (maverick) Firefox/3.6.10'
}
r = requests.post(url=url, headers=header, data=formdata)
print(r.json())  # parse the JSON response body into a Python object
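As with GET parameters, how the data dict becomes the form-encoded request body can be checked offline by preparing the POST without sending it (same endpoint and fields as above):

```python
import requests

req = requests.Request('POST', 'http://cn.bing.com/ttranslationlookup',
                       data={'from': 'en', 'to': 'zh-CHS', 'text': 'pig'})
p = req.prepare()
# data= is urlencoded into the body, and the Content-Type header is set for us
print(p.body)                     # → from=en&to=zh-CHS&text=pig
print(p.headers['Content-Type'])  # → application/x-www-form-urlencoded
```

Passing json= instead of data= would serialize the dict as a JSON body with Content-Type application/json; this endpoint expects a classic form body, so data= is the right choice here.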