Get It Done in 50 Lines of Code! A Hands-On Guide to Shaping a Video's Danmaku Any Way You Want
程序员文章站
2022-03-20 15:38:51
Preface
Bilibili, as a danmaku (bullet-comment) video site, has its much-talked-about danmaku culture. So let's find out: which comments appear most often in a given video?
Knowledge points:
1. The basic crawling workflow
2. Regular expressions
3. requests
4. jieba
5. csv
6. wordcloud
Development environment:
Python 3.6
PyCharm
Python part
Steps:
import re
import requests
import csv
1. Determine the target URL and the headers parameter
Code:
url = 'https://api.bilibili.com/x/v1/dm/list.so?oid=186803402'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'}
2. Mimic a browser request and fetch the response content
resp = requests.get(url, headers=headers)
# resp.text can come out garbled (mojibake), so decode the raw bytes explicitly
print(resp.content.decode('utf-8'))
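The "garbled" note above refers to mojibake: `resp.text` lets requests guess the encoding, and when a server omits the charset, requests falls back to ISO-8859-1. A minimal stdlib-only illustration of why the explicit decode matters (the sample string is just for demonstration):

```python
# The bytes a server would actually send for UTF-8 Chinese text.
raw = "弹幕".encode("utf-8")

garbled = raw.decode("iso-8859-1")  # wrong guess -> unreadable byte soup
correct = raw.decode("utf-8")       # explicit decode -> original text

print(garbled)
print(correct)  # 弹幕
```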
3. Parse the page and extract the data
# pull each comment's text out of the danmaku XML
html_doc = resp.content.decode('utf-8')
res = re.compile('<d.*?>(.*?)</d>')
danmu = re.findall(res, html_doc)
print(danmu)
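The regex can be sanity-checked against a hand-written fragment shaped like Bilibili's danmaku XML (the `<d>` elements and their `p` attributes below are made up for illustration):

```python
import re

# A made-up fragment in the shape of the danmaku XML response.
sample = (
    '<d p="12.3,1,25,16777215,1585000000,0,abc,123">前方高能</d>'
    '<d p="45.6,1,25,16777215,1585000001,0,def,456">哈哈哈</d>'
)

# The non-greedy <d.*?> skips each tag's attributes; the capture
# group grabs only the comment text between the tags.
pattern = re.compile('<d.*?>(.*?)</d>')
print(pattern.findall(sample))  # ['前方高能', '哈哈哈']
```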
4. Save the data
# open the file once and write one comment per row
with open('c:/users/administrator/desktop/b站弹幕.csv', 'a', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    for i in danmu:
        writer.writerow([i])
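Opening the file once and writing a row per comment avoids reopening it on every iteration. The same pattern can be exercised without touching the desktop path by writing into an in-memory buffer (the comment list below is a stand-in):

```python
import csv
import io

danmu = ["前方高能", "哈哈哈", "awsl"]  # stand-in comments

buf = io.StringIO()  # in-memory file, so the sketch runs anywhere
writer = csv.writer(buf)
for comment in danmu:
    writer.writerow([comment])  # one comment per CSV row

print(buf.getvalue())
```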
Visualizing the data
Import the word-cloud library wordcloud and the Chinese word-segmentation library jieba
import jieba
import wordcloud
Import the imread function from imageio and use it to read a local image, which becomes the word cloud's mask shape
import imageio
mk = imageio.imread(r"拳头.png")
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36",
}
response = requests.get("https://api.bilibili.com/x/v1/dm/list.so?oid=186803402", headers=headers)
# print(response.text)
html_doc = response.content.decode('utf-8')
pattern = re.compile("<d.*?>(.*?)</d>")  # renamed from format, which shadows the builtin
danmu = pattern.findall(html_doc)
with open('c:/users/mark/desktop/b站弹幕.csv', "a", newline='', encoding='utf-8-sig') as csvfile:
    writer = csv.writer(csvfile)
    for i in danmu:
        writer.writerow([i])
Build and configure the word-cloud object w. Note the stopwords parameter: any word you don't want shown in the cloud goes into the stopwords set; here we drop the two words "曹操" and "孔明"
w = wordcloud.WordCloud(width=1000,
                        height=700,
                        background_color='white',
                        font_path='msyh.ttc',
                        mask=mk,
                        scale=15,
                        stopwords={'曹操', '孔明'},
                        contour_width=5,
                        contour_color='red')
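wordcloud applies the stopwords set itself, but the idea is plain set filtering and can be shown without the library; assuming a token list like the one jieba would return (the tokens below are illustrative), the unwanted words drop out before joining:

```python
# Illustrative tokens, shaped like jieba.lcut output (made up here).
txtlist = ["曹操", "率军", "南下", "孔明", "借", "东风"]
stopwords = {"曹操", "孔明"}

# Keep only tokens not in the stopword set, then join for the cloud.
kept = [w for w in txtlist if w not in stopwords]
string = " ".join(kept)
print(string)  # 率军 南下 借 东风
```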
Segment the Chinese text read from the external file to obtain string
# use a with block so the file is closed after reading
with open('c:/users/mark/desktop/b站弹幕.csv', encoding='utf-8') as f:
    txt = f.read()
txtlist = jieba.lcut(txt)
string = " ".join(txtlist)
Pass string to w's generate() method to feed the text into the word cloud
w.generate(string)
Export the word-cloud image to a file
w.to_file('c:/users/mark/desktop/output2-threekingdoms.png')
The result looks like this: