
Parsing and Plotting Maoyan Top 100 Data with Python


Maoyan Top 100 Data Analysis

This post follows up on the previous one, where the data was scraped; here we parse that data and try out several more ways to extract and store it. The previous post: link

1. Data Extraction Methods

1. Parsing with regular expressions

import re

def parse_one_page(html):
    # use a raw string so backslashes like \d reach the regex engine intact
    pattern = r'<dd>.*?board-index.*?">(\d+)</i>.*?data-src="(.*?)".*?/>.*?movie-item-info.*?title="(.*?)".*?star">' + \
              r'(.*?)</p>.*?releasetime">(.*?)</p>.*?integer">(.*?)</i>.*?fraction">(\d+)</i>.*?</dd>'
    # re.S lets '.' match any character, including newlines
    regex = re.compile(pattern, re.S)
    items = regex.findall(html)
    for item in items:
        # key names are kept consistent with the other parsers and the CSV fieldnames
        yield {
            'index': item[0],
            'thumb': get_large_thumb(item[1]),
            'name': item[2],
            'star': item[3].strip()[3:],                    # drop the "主演:" prefix
            'time': get_release_time(item[4].strip()[5:]),  # drop the "上映时间:" prefix
            'area': get_release_area(item[4].strip()[5:]),
            'score': item[5] + item[6]                      # integer part + fractional part
        }
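All four parsers lean on three helpers from the previous post: get_large_thumb, get_release_time, and get_release_area. They are not reproduced in this excerpt, so here is a minimal sketch of what they might look like, assuming Maoyan poster URLs carry an '@160w_220h…' size suffix and release-time strings look like '1993-01-01(中国香港)' (hedged reconstructions, not the original code):

import re

def get_large_thumb(url):
    # drop the '@160w_220h_1e_1c' style suffix to get the full-size poster
    return url.split('@')[0]

def get_release_time(data):
    # keep the leading date and drop any trailing '(region)' part
    match = re.search(r'[\d-]+', data)
    return match.group() if match else 'unknown'

def get_release_area(data):
    # the region, when present, sits inside (possibly full-width) parentheses
    match = re.search(r'[（(](.*?)[）)]', data)
    return match.group(1) if match else 'unknown'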

2. Parsing with lxml XPath

from lxml import etree

def parse_one_page2(html):
    parse = etree.HTML(html)
    items = parse.xpath("//*[@id='app']//div//dd")
    for item in items:
        yield {
            'index': item.xpath("./i/text()")[0],
            # the second <img> inside the link carries the lazy-loaded poster URL
            'thumb': get_large_thumb(str(item.xpath("./a/img[2]/@data-src")[0].strip())),
            'name': item.xpath("./a/@title")[0],
            'star': item.xpath(".//p[@class='star']/text()")[0].strip()[3:],
            'time': get_release_time(item.xpath(".//p[@class='releasetime']/text()")[0].strip()[5:]),
            # there is no separate area element; the region is embedded in the release-time text
            'area': get_release_area(item.xpath(".//p[@class='releasetime']/text()")[0].strip()[5:]),
            'score': item.xpath(".//p[@class='score']/i[1]/text()")[0] +
                     item.xpath(".//p[@class='score']/i[2]/text()")[0]
        }

XPath is typically used on regularly structured documents; it is a powerful tool both for parsing in general and for information extraction in crawlers.
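Because XPath expressions are easy to get subtly wrong, it helps to test them against a small fragment first. A self-contained check (the fragment below is made up for illustration):

from lxml import etree

snippet = """
<dd>
  <i class="board-index board-index-1">1</i>
  <p class="star">主演:张国荣,张丰毅,巩俐</p>
</dd>
"""
node = etree.HTML(snippet)
print(node.xpath("//dd/i/text()")[0])                      # 1
print(node.xpath("//p[@class='star']/text()")[0].strip())  # 主演:...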

3. BeautifulSoup's soup.select method

from bs4 import BeautifulSoup

def parse_one_page3(html):
    soup = BeautifulSoup(html, 'lxml')
    items = range(10)  # the board shows 10 movies per page

    for item in items:
        yield {
            'index': soup.select("dd i.board-index")[item].string,
            'thumb': get_large_thumb(soup.select("a > img.board-img")[item]['data-src']),
            'name': soup.select(".name a")[item].string,
            'star': soup.select(".star")[item].string.strip()[3:],
            'time': get_release_time(soup.select(".releasetime")[item].string.strip()[5:]),
            # no separate area element; reuse the release-time text
            'area': get_release_area(soup.select(".releasetime")[item].string.strip()[5:]),
            'score': soup.select(".integer")[item].string + soup.select(".fraction")[item].string,
        }

Here the extraction is done with BeautifulSoup plus CSS selectors.

4. BeautifulSoup's find_all API

def parse_one_page4(html):
    soup = BeautifulSoup(html, 'lxml')
    items = range(10)

    for item in items:
        yield {
            'index': soup.find_all(class_="board-index")[item].string,
            'thumb': get_large_thumb(soup.find_all(class_="board-img")[item].attrs['data-src']),
            'name': soup.find_all(name='p', attrs={'class': "name"})[item].string,
            'star': soup.find_all(name='p', attrs={'class': "star"})[item].string.strip()[3:],
            'time': get_release_time(soup.find_all(class_='releasetime')[item].string.strip()[5:]),
            'area': get_release_area(soup.find_all(class_='releasetime')[item].string.strip()[5:]),
            'score': soup.find_all(name='i', attrs={'class': "integer"})[item].string.strip() +
                     soup.find_all(name='i', attrs={'class': "fraction"})[item].string.strip()
        }

Besides pairing it with CSS selectors, BeautifulSoup can extract directly with its built-in find_all function, as shown above.
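The two styles are interchangeable: every select call above has a find_all equivalent. A tiny self-contained check:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<p class="name"><a>霸王别姬</a></p>', 'lxml')
# the CSS selector and the keyword filter return the same list of tags
print(soup.select("p.name"))
print(soup.find_all('p', class_='name'))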

2. Storage Methods

1. Storing dictionaries as JSON strings

import json

def write_to_file(items):
    # 'a' appends; utf_8_sig keeps Chinese text readable when opened in Excel
    # one JSON object per line, so a plain text file fits better than a .csv
    with open('save.txt', 'a', encoding='utf_8_sig') as f:
        f.write(json.dumps(items, ensure_ascii=False) + '\n')
        print('Movie #%s saved' % items["index"])
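Since each line of this file is an independent JSON object, reading it back is just a matter of decoding line by line; a minimal sketch:

import json

def read_from_file(path='save.txt'):
    # utf_8_sig transparently strips the BOM added when writing
    with open(path, encoding='utf_8_sig') as f:
        return [json.loads(line) for line in f if line.strip()]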

2. Row storage with csv.DictWriter

import csv

def write_to_file2(items):
    with open('save2.csv', 'a', encoding='utf_8_sig', newline='') as f:
        fieldnames = ['index', 'thumb', 'name', 'star', 'time', 'area', 'score']
        w = csv.DictWriter(f, fieldnames=fieldnames)
        w.writerow(items)
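Note that write_to_file2 never writes a header row, which is why the pd.read_csv call later passes header=None and supplies the names itself. If you do want a header, write it once before the scraping loop, for example:

import csv

fieldnames = ['index', 'thumb', 'name', 'star', 'time', 'area', 'score']
with open('save2.csv', 'w', encoding='utf_8_sig', newline='') as f:
    csv.DictWriter(f, fieldnames=fieldnames).writeheader()

In that case, drop header=None (and names=...) from the read_csv call in the plotting section.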

3. Storing values only with csv.writer

def write_to_file3(items):
    with open('save.csv', 'a', encoding='utf_8_sig', newline='') as f:
        w = csv.writer(f)
        # dicts keep insertion order (Python 3.7+), so the columns stay aligned
        w.writerow(items.values())
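Putting the pieces together, a minimal driver might look like the sketch below. get_one_page is a hypothetical fetch helper standing in for the request code from the previous post, and any of the four parsers can stand in for parse_one_page:

import requests

def get_one_page(url):
    # stand-in for the fetch helper from the previous post
    headers = {'User-Agent': 'Mozilla/5.0'}
    response = requests.get(url, headers=headers)
    return response.text if response.status_code == 200 else None

def main():
    # the Top 100 board is paginated with offset=0, 10, ..., 90
    for offset in range(0, 100, 10):
        html = get_one_page('https://maoyan.com/board/4?offset=%d' % offset)
        if html:
            for item in parse_one_page(html):
                write_to_file2(item)

if __name__ == '__main__':
    main()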

3. Data Analysis and Visualization

As an example, let's draw a bar chart of the ten highest-rated movies.

1. Setup: import the libraries, load the data, and set the theme

import matplotlib.pyplot as plt
import pandas as pd


# SimHei so the Chinese movie titles on the axis render correctly
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['font.family'] = 'sans-serif'
# keep the minus sign '-' from rendering as a broken glyph
plt.rcParams['axes.unicode_minus'] = False

# set the theme
plt.style.use('ggplot')
# set the figure size
fig = plt.figure(figsize=(8, 5))
colors1 = '#6D6D6D'
# load the raw data; utf_8_sig matches the encoding the CSV was written with
columns = ['index', 'thumb', 'name', 'star', 'time', 'area', 'score']
df = pd.read_csv('save2.csv', encoding='utf_8_sig', header=None, names=columns, index_col='index')
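If the score column comes back as text (for example because a stray header row slipped into the data), coerce it to numeric before sorting; a small defensive step, not in the original:

df['score'] = pd.to_numeric(df['score'], errors='coerce')
print(df.head())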

2. Plotting

def annsis1():
    df_score = df.sort_values('score', ascending=False)  # ascending=False: highest first
    name1 = df_score['name'][:10]    # x-axis labels
    score1 = df_score['score'][:10]  # y-axis values

    # range(10) keeps the bars in x-axis order
    plt.bar(range(10), score1, tick_label=name1)
    plt.ylim(9, 10)
    plt.title("Top 10 Movies by Rating", color=colors1)
    plt.xlabel('Movie')
    plt.ylabel('Rating')
    # annotate each bar with its value
    for x, y in enumerate(list(score1)):
        plt.text(x, y + 0.01, '%s' % round(y, 1), ha='center', color=colors1)
    plt.xticks(rotation=270)  # rotate the labels 270 degrees
    plt.tight_layout()        # trim surrounding whitespace
    plt.show()


annsis1()

The 270° rotation keeps long movie titles from overlapping one another.
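A gentler alternative is a 45° tilt with right-aligned labels, which often stays readable without turning the titles fully vertical:

plt.xticks(rotation=45, ha='right')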

3. Results

[Figure: bar chart of the ten highest-rated movies]