
Python Crawler Exercise 5: Scraping the 2017 Statistical Division Codes and Urban-Rural Classification Codes (code and full dataset included)


This article is for learning purposes only. If you just need the data, a download link is provided at the end of the post; please do not re-crawl the site unnecessarily.

Recently at work, the Statistical Division Codes and Urban-Rural Classification Codes were updated to the 2017 edition, and I needed the latest data. So this time I put in the effort to crawl the complete 2017 dataset. Compared with the 2016 edition from my first crawler exercise, this version is much improved. The main points are as follows.

1. While testing requests against the target site, I found that it now employs anti-crawling measures: a GET request must carry a headers field that imitates a browser, otherwise it is rejected.

2. A list of User-Agent headers is kept, and each request picks one at random, which lowers the chance of being blocked.

3. Failed requests are retried several times, which greatly improves the success rate. This is crucial: whether because the target server is flaky or because of its anti-crawling mechanism, requests to some sub-pages frequently hang and fail.

4. Multiprocessing is used so that several provinces are crawled in parallel, which greatly improves efficiency. The first version was a single-process crawler, and with tens of thousands (or even hundreds of thousands) of pages to fetch, it looked like it would take about a day to finish.

5. One more small detail: the pages declare gb2312 as their encoding, but decoding the responses as gb2312 garbles some rare Chinese characters; decoding as gbk (a superset of gb2312) avoids the problem. A quick check of the difference is sketched below.
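A minimal sketch of that encoding difference (assuming Python 3; '镕' is used here as one example of a character that is outside the GB2312 set but inside GBK):

# rare characters such as '镕' can be encoded in gbk but not in gb2312,
# so decoding the site's responses as gb2312 would garble them
for codec in ('gb2312', 'gbk'):
    try:
        '镕'.encode(codec)
        print(codec + ': can represent this character')
    except UnicodeEncodeError:
        print(codec + ': cannot represent this character (would be garbled)')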

Enough talk, on to the code. It is split into three parts: the scheduler, the spider, and the downloader.

Scheduler: builds the list of target URLs, opens a process pool, and coordinates the spider and the downloader.

import Spiders
import downloading
from multiprocessing import Pool

# Base URL of the 2017 edition and the two-digit codes of the 31 province-level regions
aimurl = "http://www.stats.gov.cn/tjsj/tjbz/tjyqhdmhcxhfdm/2017/"
aimurllist = ["11", "12", "13", "14", "15", "21", "22", "23", "31", "32", "33", "34", "35", "36", "37",
              "41", "42", "43", "44", "45", "46", "50", "51", "52", "53", "54", "61", "62", "63", "64", "65"]

def run_proc(url, num):
    # crawl one province (identified by the two-digit code num) and write its results to disk
    print(num + ' is running')
    (city, county, town, village) = Spiders.spider(url, num)
    downloading.download(city, county, town, village, num)
    print(num + ' ended')


if __name__ == "__main__":
    p = Pool(8)  # pool of 8 worker processes
    for i in aimurllist:
        p.apply_async(run_proc, args=(aimurl, i))  # one task per province
    print('Waiting for all subprocesses done ...')
    p.close()  # no more tasks will be submitted to the pool
    p.join()   # wait for all worker processes to finish before the main process continues
    print('All subprocesses done')

Spider: spiders, which contains the crawling logic. The pool of random headers and the retry-on-failure requests raise the success rate. Also, because this crawl goes quite deep (province, city, county, town, village), the code logic deserves some thought; if you are interested, read through the code and the crawling process should become clear.

import requests
from bs4 import BeautifulSoup
import random
import time

# Pool of User-Agent strings; each request picks one at random to reduce the chance of being blocked
ua_list = [
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv2.0.1) Gecko/20100101 Firefox/4.0.1",
        "Mozilla/5.0 (Windows NT 6.1; rv2.0.1) Gecko/20100101 Firefox/4.0.1",
        "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
        "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
        "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.62 Safari/537.36",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]


# Retry-on-failure requests to raise the success rate; with a normal connection one retry is usually enough
def getsoup(url, num_retries=6):
    user_agent = random.choice(ua_list)  # pick a random User-Agent for this request
    headers = {"User-Agent": user_agent}
    try:
        res = requests.get(url, headers=headers, timeout=10)  # fetch the target page via GET
        res.encoding = 'gbk'  # the pages are effectively gbk-encoded (see note above about gb2312)
        soup = BeautifulSoup(res.text, 'html.parser')  # parse the page with BeautifulSoup
        return soup
    except Exception as e:
        if num_retries > 0:
            time.sleep(10)
            print(url)
            print('request failed, retries left: ' + str(num_retries) + '  ' + time.ctime())
            return getsoup(url, num_retries - 1)
        else:
            print("retry fail!")
            print("error: %s" % e + "   " + url)
            return  # return None; the caller will then raise, so the failure shows up in the log


# City level: fetch the prefecture-level city codes of one province
def getsecond(url, num):
    city = {}
    soup = getsoup(url + num + '.html')
    for j in soup.select('.citytr'):
        # print(j)
        id = str(j.select('td')[0].text)  # e.g. 130100000000
        city[id[0:4]] = {'qhdm': id, 'name': j.select('td')[1].text, 'cxfldm': '0'}
    return city

# County level: fetch the county/district codes under each city
def getthird(url, lists):
    county = {}
    for i in lists:  # i is a four-digit city code, e.g. '1302'
        soup = getsoup(url + i[0:2] + '/' + i + '.html')
        for j in soup.select('.countytr'):
            # print(j)
            id = str(j.select('td')[0].text)  # e.g. 130201000000
            county[id[0:6]] = {'qhdm': id, 'name': j.select('td')[1].text, 'cxfldm': '0'}
    return county


# Town level: fetch the town/sub-district codes (the "市辖区" placeholder entries have no lower-level pages)
def getfourth(url, lists):
    town = {}
    for i in lists:  # i is a six-digit county code, e.g. '130202'
        # print(url + i[0:2] + '/' + i[2:4] + '/' + i + '.html')
        soup = getsoup(url + i[0:2] + '/' + i[2:4] + '/' + i + '.html')
        for j in soup.select('.towntr'):
            # print(j)
            id = str(j.select('td')[0].text)  # e.g. 130202001000
            town[id[0:9]] = {'qhdm': id, 'name': j.select('td')[1].text, 'cxfldm': '0'}  # keyed by the 9-digit prefix, e.g. 130202001
    return town


# Village level: fetch the village/committee codes together with their urban-rural classification codes
def getfifth(url, lists):
    village = {}
    for i in lists:  # i is a nine-digit town code, e.g. '130202001'
        # print(url + i[0:2] + '/' + i[2:4] + '/' + i[4:6] + '/' + i + '.html')
        soup = getsoup(url + i[0:2] + '/' + i[2:4] + '/' + i[4:6] + '/' + i + '.html')
        for j in soup.select('.villagetr'):
            # print(j)
            id = str(j.select('td')[0].text)  # e.g. 110101001001
            village[id[0:12]] = {'qhdm': id, 'name': j.select('td')[2].text, 'cxfldm': j.select('td')[1].text}
    return village


def spider(aimurl, num):
    city = getsecond(aimurl, num)
    print(num + ' city finished!')
    county = getthird(aimurl, city)
    print(num + ' county finished!')
    town = getfourth(aimurl, county)
    print(num + ' town finished!')
    village = getfifth(aimurl, town)
    print(num + ' village finished!')
    print(num + " crawl finished! Now writing the results to txt...")
    return city, county, town, village

Attentive readers may notice that no getfirst function is defined here. It does exist, but only in the single-process script: there the idea is to first crawl all province-level entries, then all city-level entries, and so on down to the village level. In the multiprocess script the work is instead partitioned by province, and each province is crawled level by level on its own, so the province index page never needs to be parsed. For reference, a getfirst along those lines might look like the sketch below.
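This is only a minimal sketch, not part of the multiprocess script above; the '.provincetr' row class and the 'index.html' path are assumptions made by analogy with the other levels, and it reuses the getsoup helper defined earlier.

# Province level (hypothetical helper, mirroring getsecond/getthird above):
# the index page lists provinces as links such as "11.html", so the two-digit
# code is taken from the link target rather than from a table cell.
def getfirst(url):
    province = {}
    soup = getsoup(url + 'index.html')          # assumed index page of the 2017 edition
    for a in soup.select('.provincetr a'):      # assumed row class, by analogy with .citytr etc.
        num = a['href'][0:2]                    # e.g. "11" from "11.html"
        province[num] = {'qhdm': num + '0000000000', 'name': a.text, 'cxfldm': '0'}
    return province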

Downloader: downloading, which simply writes the crawled content out to a text file.

def download(city, county, town, village, num):
    path = r'E:\tjyqhdmhcxhfdm2017\tjyqhdmhcxhfdm2017_' + num + '.txt'
    dic = {**city, **county, **town, **village}  # merge the four level dicts into one
    with open(path, 'a', encoding='utf-8') as f:  # open once and append one line per record
        for i in dic.values():
            f.write('"' + i['qhdm'] + '","' + i['name'] + '","' + i['cxfldm'] + '"' + '\n')
    print(num + " write finished!")

The complete code is shown above; feel free to copy it and try running it. Two things to note:

1. Set the output path in the downloader first and create the folder in advance (or create it from code, as in the sketch after this list).

2. After one full run you may well end up with fewer than 31 files: even with the retry mechanism, there is still some chance that a request never succeeds and a province fails entirely. In that case, search the console output for "error" to see which provinces failed, change aimurllist in the scheduler to contain only those codes, and run again (see the sketch below). This is also why a single-process crawler would almost never manage to download the whole dataset in one pass.
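A small convenience sketch for both points, not part of the original scripts; the folder path matches the downloader above, and the province codes shown are placeholders for whichever ones failed in your run:

import os

# create the output folder used by the downloader if it does not exist yet
os.makedirs(r'E:\tjyqhdmhcxhfdm2017', exist_ok=True)

# in the scheduler, narrow the list to re-crawl only the provinces whose files are missing
aimurllist = ["14", "33"]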

Finally, here is the complete 2017 Statistical Division Codes and Urban-Rural Classification Codes dataset (as of 31 October 2017); the download link is below.

Link: https://pan.baidu.com/s/1zbQyKx1zyh4oSmi-j_WIOQ   Password: bfcu