
Python crawler XPath example: scraping city names across China

# Requirement: parse out all the city names
# URL: https://www.aqistudy.cn/historydata/

import requests
from lxml import etree

# # Method 1: scrape the hot cities and the full city list separately, i.e. two loops
# headers  = {
# 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.63'
# }
# url = 'https://www.aqistudy.cn/historydata/'
# page_text = requests.get(url=url,headers=headers).text
# tree = etree.HTML(page_text)
# hot_city_list = tree.xpath('//div[@class="bottom"]/ul/li')
# all_city_names = []
# # Parse out the hot-city names
# for li in hot_city_list:
#     hot_city_name = li.xpath('./a/text()')[0]
#     all_city_names.append(hot_city_name)
#
# # Parse out the names of all cities
# all_city_list = tree.xpath('//div[@class="bottom"]/ul/div[2]/li')
# for li in all_city_list:
#     all_city_name = li.xpath('./a/text()')[0]
#     all_city_names.append(all_city_name)
#
# print(all_city_names,len(all_city_names))



# Method 2: can we skip the two separate passes and grab everything in one go?
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.63'
}
url = 'https://www.aqistudy.cn/historydata/'
page_text = requests.get(url=url, headers=headers).text
tree = etree.HTML(page_text)
city_names = []
# Select the <a> tags of the hot cities and of all cities in a single query
# Hot-city hierarchy: div[@class="bottom"]/ul/li/a
# All-city hierarchy: div[@class="bottom"]/ul/div[2]/li/a
# Join the two expressions with the XPath union operator "|"
a_list = tree.xpath('//div[@class="bottom"]/ul/li/a | //div[@class="bottom"]/ul/div[2]/li/a')
for a in a_list:
    city_name = a.xpath('./text()')[0]
    city_names.append(city_name)
print(city_names,'\n',len(city_names))

Note: the code above shows two approaches. The first (commented out) uses two for loops to scrape the hot cities and the full city list separately; the second collects both in a single pass with one XPath union expression.
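If the live page cannot be fetched (the site may require extra headers or render its content with JavaScript), the union-operator technique itself can still be checked offline against a small hand-written HTML fragment. The sketch below is a minimal illustration: the markup and the class names "hot" and "all" are made up for this example, not taken from the real page.

from lxml import etree

# Hypothetical markup imitating a block of hot-city links plus a block of all-city links
html = '''
<div class="bottom">
  <ul class="hot">
    <li><a href="#">beijing</a></li>
    <li><a href="#">shanghai</a></li>
  </ul>
  <ul class="all">
    <li><a href="#">aba</a></li>
    <li><a href="#">anshan</a></li>
  </ul>
</div>
'''

tree = etree.HTML(html)
# "|" unions the node sets matched by the two expressions; lxml returns them in document order
a_list = tree.xpath('//ul[@class="hot"]/li/a | //ul[@class="all"]/li/a')
print([a.text for a in a_list])  # ['beijing', 'shanghai', 'aba', 'anshan']

The same idea carries over to the real page: whatever the two hierarchies are, joining them with "|" lets a single loop collect both result sets.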