
A crawler that counts how many times each author has published on the Reader (读者网) site


Approach 1:

Construct each issue's URL by hand from the observed pattern.

Crawl twice: the first pass collects the authors, deduplicates them with a set, and initializes the result set; the second pass looks each author up in the result set and counts the occurrences.

Convert the result set into a list of dicts (each dict is one record) and sort the whole list by the times key.

Write the results to a txt file.
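
The URL rule that all three scripts rely on can be summarized in a small helper (a sketch for illustration only; build_issue_url does not appear in the original code): issues up to 2012 no. 13 live at 1_YYYY_NN.html, and later issues at YYYY_NN/index.html.

baseurl = "http://www.52duzhe.com/"

def build_issue_url(year, issue):
    # hypothetical helper that captures the pattern used by the crawlers below
    stem = "{}_{:02d}".format(year, issue)           # e.g. 2011_03, 2015_12
    if year < 2012 or (year == 2012 and issue < 14):
        return baseurl + "1_" + stem + ".html"       # old layout: 1_YYYY_NN.html
    return baseurl + stem + "/index.html"            # new layout: YYYY_NN/index.html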

from bs4 import BeautifulSoup as be
import requests as req
import os

baseurl = "http://www.52duzhe.com/"

def do_soup(url):
    try:
        r = req.get(url, headers={'user-agent': 'Mozilla/5.0'})
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        html = r.text
        soup = be(html,'lxml')
        return soup
    except Exception:
        print("获取" + url + "失败")
        return None              # callers check for None before using the soup


def search_each_per(tag):
    aut_set = set()
    a_list = []
    for i in range(2010,2018):
        for j in range(1,25):
            # issue numbers run 1-24; zero-pad single digits, e.g. 2011_03
            if j < 10:
                extraurl = str(i) + '_0' + str(j)
            else:
                extraurl = str(i) + '_' + str(j)
            # issues up to 2012_13 use 1_YYYY_NN.html; later issues use YYYY_NN/index.html
            if i in [2010, 2011, 2012]:
                if i == 2012 and j >= 14:
                    url = baseurl + extraurl + "/index.html"
                else:
                    url = baseurl + '1_' + extraurl + ".html"
            else:
                url = baseurl + extraurl + "/index.html"
            soup = do_soup(url)   # fetch and parse this issue's page
            if soup is None:
                continue
            per_aut_list = soup.find_all('td',class_="author")
            if tag==1:
                for k in per_aut_list:
                    aut_set.add(k.string)
                print("{}年{}期作者已入库".format(i,j))
            else:
                for k in per_aut_list:
                    a_list.append(k.string)
    if tag == 1:
        return list(aut_set)    # return the deduplicated author list
    else:
        return a_list           # return the list that keeps duplicates, used for counting
    
def main():
    author_list0 = search_each_per(1)   # 1 is a control flag: receive the deduplicated list
    print("正在接收有重复数据列表,请等待...")
    a_list = search_each_per(0)         # receive the list that keeps duplicate entries
    result = {}                         # dict that holds the counts
    for i in author_list0:
        result[str(i)] = 0     # initialize this author's count
        for j in a_list:
            if i==j:
                result[str(i)] += 1
    # sort the results by publication count, descending
    print("下面对结果按发表次数做降序处理...")
    att = []              # container for the per-author records
    for key,value in result.items():
        j={}
        j["author"]=key
        j["times"]=value
        att.append(j)
    att.sort(key=lambda x: x["times"], reverse=True)
    # write the results to a text file
    print("将结果写入text文本中,请耐心等待...")
    path = os.getcwd()
    filename = os.path.join(path, "读者作者结果1.txt")   # join with the working directory so the path separator is not lost
    new = open(filename, "w", errors='ignore')   # ignore characters the local encoding cannot represent (illegal multibyte sequence)
    for i in att:
        author = i["author"]
        times = i["times"]
        print(author)
        print(times)
        if author is None:                       # author can be None; None + str raises: unsupported operand type(s) for +: 'NoneType' and 'str'
            new.write("None" + "\t" + str(times) + "\n")
        else:
            new.write(author +"\t" + str(times) + "\n")
    new.close()
    print("完成统计")

main()
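
The first approach crawls every issue twice and then counts with a nested loop over the two lists. The same statistics can be gathered in a single pass; a minimal sketch, reusing the do_soup helper above and assuming urls is an iterable of issue URLs built with the same rule:

from collections import Counter

def count_authors_one_pass(urls):
    counter = Counter()
    for url in urls:
        soup = do_soup(url)
        if soup is None:
            continue
        # count every author cell on the page; None entries stay as a Counter key
        counter.update(td.string for td in soup.find_all('td', class_="author"))
    # most_common() returns (author, times) pairs already sorted in descending order
    return counter.most_common()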

Approach 2:

Construct each issue's URL by hand from the observed pattern.

Crawl once, using a list as the container for dict records; a tag flag marks whether the author is already in the list. If not, append a new record; if so, increment its times by 1.

Return the list directly and sort it by each dict's times key.

Write the results to a txt file.

from bs4 import BeautifulSoup as be
import requests as req
import os

baseurl = "http://www.52duzhe.com/"

def do_soup(url):
    try:
        r = req.get(url, headers={'user-agent': 'Mozilla/5.0'})
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        html = r.text
        soup = be(html,'lxml')
        return soup
    except Exception:
        print("获取" + url + "失败")
        return None              # callers check for None before using the soup


def search_each_per():
    obj_list = []
    for i in range(2010,2018):
        for j in range(1,25):
            if j<10:
                extraurl = str(i)+'_0'+str(j)
            else:
                extraurl = str(i)+'_'+str(j)
            if i in [2010,2011,2012]:
                if(i == 2012 and j>=14):
                    url = baseurl + extraurl + r"/index.html"
                else:
                    url = baseurl + '1_' + extraurl + ".html"
            else:
                url = baseurl + extraurl + r"/index.html"
            soup = do_soup(url)   # fetch and parse this issue's page
            if soup is None:
                continue
            per_aut_list = soup.find_all('td',class_="author")
            for it in per_aut_list:               # don't reuse i here out of habit: it would clash with the year loop variable
                tag = 0
                for jk in obj_list:
                    if(jk["author"] == it.string):
                        jk["times"] += 1
                        tag = 1
                        break
                if(tag == 0):
                    obj = {"author":it.string,"times":1}
                    obj_list.append(obj)
    return obj_list
    
def main():
    print("正在创建结果对象列表,请耐心等待...")
    obj_list = search_each_per()          # receive the result list
    # sort the results by publication count, descending
    print("下面对结果按发表次数做降序处理...")
    obj_list.sort(key=lambda x: x["times"], reverse=True)
    # write the results to a text file
    print("将结果写入text文本中,请耐心等待...")
    path = os.getcwd()
    filename = os.path.join(path, "读者作者结果3.txt")   # join with the working directory so the path separator is not lost
    new = open(filename, "w", errors='ignore')  # ignore characters the local encoding cannot represent (illegal multibyte sequence)
    for i in obj_list:
        author = i["author"]
        times = i["times"]
        print(author)
        print(times)
        if author is None:                       # author can be None; None + str raises: unsupported operand type(s) for +: 'NoneType' and 'str'
            new.write("None" + "\t" + str(times) + "\n")
        else:
            new.write(author +"\t" + str(times) + "\n")
    new.close()
    print("完成统计")

main()
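
The inner for jk in obj_list scan makes the second approach quadratic in the number of distinct authors. A dictionary keyed by author name gives the same counts with constant-time lookups; a minimal sketch, where add_issue_counts and counts are illustrative names and per_aut_list is the list of td cells found for one issue, as in the loop above:

def add_issue_counts(per_aut_list, counts):
    # counts: dict mapping author name -> times, shared across all issues (hypothetical)
    for td in per_aut_list:
        counts[td.string] = counts.get(td.string, 0) + 1
    return counts

The list of {"author": ..., "times": ...} records used for sorting can then be built once from counts.items() after the crawl finishes, instead of being searched on every hit.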

Approach 3:

Use a class to create objects as the data records.

Scrape each issue's link directly from the homepage, store the links in a list, then walk the list and collect each issue's authors.

Crawl once, using a list as the container for the objects; a tag flag marks whether an object for the author already exists. If not, instantiate one and append it to the list; if so, increment the object's times by 1.

Return the list directly and sort the objects by their times attribute.

Write the results to a txt file.

from bs4 import BeautifulSoup as be
import requests as req
import os

class Author(object):
    # one record per author: the name and how many times the author has published
    def __init__(self, name):
        self.name = name
        self.times = 1

baseurl = "http://www.52duzhe.com/"

def do_soup(url):
    try:
        r = req.get(url)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        html = r.text
        soup = be(html,'lxml')
        return soup
    except Exception:
        print("获取" + url + "失败")
        return None              # callers check for None before using the soup


def search_each_per():
    url_list = []
    obj_list = []
    soup = do_soup(baseurl)
    if soup is None:
        return obj_list                 # homepage could not be fetched; nothing to crawl
    link = soup.select(".booklist a")   # collect the issue links; select() returns a list of <a> tags
    for item in link:
        url = baseurl +item["href"]
        url_list.append(url)
    for url in url_list:
        soup = do_soup(url)   # fetch and parse this issue's page
        if soup is None:
            continue
        per_aut_list = soup.find_all('td',class_="author")
        for i in per_aut_list:
            tag = 0
            for j in obj_list:
                if(j.name == i.string):
                    j.times += 1
                    tag = 1
                    break
            if(tag == 0):
                obj = Author(i.string)
                obj_list.append(obj)
    return obj_list
    
def main():
    print("正在创建对象列表,请等待...........")
    obj_list = search_each_per()
    # sort the results by publication count, descending
    print("下面对结果按发表次数做降序处理...")
    obj_list.sort(key=lambda obj: obj.times, reverse=True)
    # write the results to a text file
    print("将结果写入text文本中,请耐心等待...")
    path = os.getcwd()
    filename = os.path.join(path, "读者作者结果2.txt")   # join with the working directory so the path separator is not lost
    new = open(filename, "w", errors="ignore")         # ignore characters the local encoding cannot represent (illegal multibyte sequence)
    for i in obj_list:
        author = i.name
        times = i.times
        print(author)
        print(times)
        if author is None:
            new.write("None" + "\t" + str(times) + "\n")
        else:
            new.write(author +"\t" + str(times) + "\n")
    new.close()
    print("完成统计")

main()
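
Concatenating baseurl with item["href"] works here because the homepage links are relative paths, but urllib.parse.urljoin handles relative and absolute hrefs alike. A small sketch of the link-collection step with that substitution (an illustration, not part of the original script; collect_issue_urls is a hypothetical name):

from urllib.parse import urljoin

def collect_issue_urls(home_soup):
    # home_soup: the parsed homepage returned by do_soup(baseurl)
    urls = []
    for a in home_soup.select(".booklist a"):      # same CSS selector as above
        urls.append(urljoin(baseurl, a["href"]))   # robust for relative or absolute hrefs
    return urls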