Scraping Weibo Video Data with a Python Crawler
程序员文章站
2022-03-09 23:06:15
Contents: Foreword · Key points · Development environment · How crawlers work · Case walkthrough
Foreword
Discover what's new anytime, anywhere! Weibo shows you every wonderful moment in the world and the story behind it. Share what you want to say and let the whole world hear your voice! Today we'll use Python to collect the good-looking videos on Weibo.
That's right, today's target is Weibo data collection, and what we're scraping are those videos of good-looking young ladies.
Key points
requests
pprint
Development environment
Version: Python 3.8
Editor: PyCharm 2021.2
How crawlers work
Purpose: fetch internet data in bulk (text, images, audio, video)
Essence: one request and response after another
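In code, one round of that request/response cycle looks roughly like this (a minimal sketch; https://www.example.com is just a placeholder page, not the Weibo API):

import requests

response = requests.get('https://www.example.com')   # send one request
print(response.status_code)                          # the server's status code
print(response.text[:200])                           # first 200 characters of the response body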
Case walkthrough
1. Import the required modules
import requests
import pprint
2. Find the target URL
Open the browser's developer tools, switch to the Fetch/XHR tab, click the request that carries the data, and copy the target URL:
https://www.weibo.com/tv/api/component?page=/tv/channel/4379160563414111/editor
3. Send the network request
headers = {
    'cookie': '',        # fill in your own cookie here
    'referer': 'https://weibo.com/tv/channel/4379160563414111/editor',
    'user-agent': '',    # fill in your own user-agent here
}
data = {
    'data': '{"component_channel_editor":{"cid":"4379160563414111","count":9}}'
}
url = 'https://www.weibo.com/tv/api/component?page=/tv/channel/4379160563414111/editor'
json_data = requests.post(url=url, headers=headers, data=data).json()
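Before calling .json() it can help to confirm the request actually went through; a small optional check using standard requests features:

response = requests.post(url=url, headers=headers, data=data)
response.raise_for_status()      # stop here if the server returned a 4xx/5xx error
json_data = response.json()      # parse the JSON body only after the request succeeded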
4. Get the data
json_data_2 = requests.post(url=url_1, headers=headers, data=data_1).json()
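url_1 and data_1 here are built from the channel response of the previous step; the complete code at the end shows the whole loop, but the relevant part looks like this:

ccs_list = json_data['data']['component_channel_editor']['list']   # list of videos on this channel page
for ccs in ccs_list:
    oid = ccs['oid']        # the video's object id
    title = ccs['title']    # the video's title
    data_1 = {
        'data': '{"component_play_playinfo":{"oid":"' + oid + '"}}'
    }
    url_1 = 'https://weibo.com/tv/api/component?page=/tv/show/' + oid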
5. Extract the data you need
dict_urls = json_data_2['data']['component_play_playinfo']['urls']   # mapping of stream quality to video URL
video_url = "https:" + dict_urls[list(dict_urls.keys())[0]]          # take the first entry and prepend the scheme
print(title + "\t" + video_url)
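pprint is imported in step 1 but never used in the snippets above; it is handy for inspecting which stream qualities the API actually returned before picking one:

pprint.pprint(dict_urls)   # print every quality/URL pair the API returned, nicely formatted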
6. Save the data
video_data = requests.get(video_url).content
with open(f'video\\{title}.mp4', mode='wb') as f:
    f.write(video_data)
print(title, "downloaded successfully................")
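Weibo titles can contain characters that are not legal in Windows filenames, and the video folder has to exist before writing to it. A small optional safeguard (sanitize_title is a helper name introduced here for illustration):

import os
import re

def sanitize_title(title):
    # strip characters that are not allowed in Windows filenames
    return re.sub(r'[\\/:*?"<>|]', '_', title)

os.makedirs('video', exist_ok=True)            # make sure the output folder exists
safe_title = sanitize_title(title)
video_data = requests.get(video_url).content
with open(f'video\\{safe_title}.mp4', mode='wb') as f:
    f.write(video_data)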
Complete code
import requests
import pprint

headers = {
    'cookie': '',        # fill in your own cookie here
    'referer': 'https://weibo.com/tv/channel/4379160563414111/editor',
    'user-agent': '',    # fill in your own user-agent here
}
data = {
    'data': '{"component_channel_editor":{"cid":"4379160563414111","count":9}}'
}
url = 'https://www.weibo.com/tv/api/component?page=/tv/channel/4379160563414111/editor'
json_data = requests.post(url=url, headers=headers, data=data).json()
print(json_data)
ccs_list = json_data['data']['component_channel_editor']['list']
next_cursor = json_data['data']['component_channel_editor']['next_cursor']
for ccs in ccs_list:
    oid = ccs['oid']
    title = ccs['title']
    data_1 = {
        'data': '{"component_play_playinfo":{"oid":"' + oid + '"}}'
    }
    url_1 = 'https://weibo.com/tv/api/component?page=/tv/show/' + oid
    json_data_2 = requests.post(url=url_1, headers=headers, data=data_1).json()
    dict_urls = json_data_2['data']['component_play_playinfo']['urls']
    video_url = "https:" + dict_urls[list(dict_urls.keys())[0]]
    print(title + "\t" + video_url)
    video_data = requests.get(video_url).content
    with open(f'video\\{title}.mp4', mode='wb') as f:
        f.write(video_data)
    print(title, "downloaded successfully................")
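The complete code reads next_cursor but never uses it; presumably it is meant for pagination, i.e. asking the same endpoint for the next batch of videos. The sketch below only illustrates that idea, and the "cursor" field inside the payload is an assumption that has not been verified against the real API:

# Hypothetical pagination sketch -- the "cursor" key in the payload is an assumption,
# not something confirmed against the actual Weibo API.
cursor = ''
for page in range(3):   # fetch three pages as an example
    data = {
        'data': '{"component_channel_editor":{"cid":"4379160563414111","count":9,"cursor":"' + cursor + '"}}'
    }
    json_data = requests.post(url=url, headers=headers, data=data).json()
    cursor = json_data['data']['component_channel_editor']['next_cursor']
    # ...process json_data['data']['component_channel_editor']['list'] as in the loop above...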
That's the full walkthrough of scraping Weibo video data with a Python crawler. For more material on collecting video data with Python, check out the other related articles!