Python crawler: scraping the details of all second-hand houses on Lianjia Shenzhen
1. Problem description:
Scrape the details of all second-hand houses on Lianjia Shenzhen and save the scraped data to a CSV file.
2. Approach:
(1) Target URL: https://sz.lianjia.com/ershoufang/
(2) Code structure:
class LianjiaSpider(object):
    def __init__(self):
    def getMaxPage(self, url):     # get the maximum page number
    def parsePage(self, url):      # parse each page and get the link of every house
    def parseDetail(self, url):    # get the details of each house from its link
(3) The __init__(self) initialization function
· headers uses the fake_useragent library to generate a random request header.
· datas is an empty list used to hold the scraped data.
def __init__(self):
    self.headers = {"User-Agent": UserAgent().random}
    self.datas = list()
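To see what this produces, fake_useragent can be tried on its own (a minimal standalone sketch, not part of the spider itself):

from fake_useragent import UserAgent

ua = UserAgent()
print(ua.random)                      # a random browser User-Agent string, different on each access
headers = {"User-Agent": ua.random}   # the same shape of header dict the spider builds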
(4) The getMaxPage() function
It is mainly used to get the maximum number of pages of second-hand house listings.
def getMaxPage(self, url):
    response = requests.get(url, headers=self.headers)
    if response.status_code == 200:
        source = response.text
        soup = BeautifulSoup(source, "html.parser")
        pageData = soup.find("div", class_="page-box house-lst-page-box")["page-data"]
        # pageData = '{"totalPage":100,"curPage":1}'; eval() turns the string into a dict
        maxPage = eval(pageData)["totalPage"]
        return maxPage
    else:
        print("fail status: {}".format(response.status_code))
        return None
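Since the page-data attribute is plain JSON, json.loads from the standard library is a safer alternative to eval(), which would execute arbitrary Python. A minimal sketch, using the example value from the comment above:

import json

pageData = '{"totalPage":100,"curPage":1}'   # example value of the page-data attribute
maxPage = json.loads(pageData)["totalPage"]  # 100, parsed without executing the string as code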
(5) The parsePage() function
It handles pagination and collects the links of all second-hand houses on every page. Pagination is done by rebuilding the URL inside a for loop, and the loop's upper bound is the maximum page number obtained from getMaxPage() above.
def parsePage(self, url):
    maxPage = self.getMaxPage(url)
    # walk through every page and collect the link of each listing
    for pageNum in range(1, maxPage + 1):
        url = "https://sz.lianjia.com/ershoufang/pg{}/".format(pageNum)
        print("Currently crawling: {}".format(url))
        response = requests.get(url, headers=self.headers)
        soup = BeautifulSoup(response.text, "html.parser")
        links = soup.find_all("div", class_="info clear")
        for i in links:
            # each <div class="info clear"> contains several <a> tags; only the first one
            # is the listing link, so find() is used instead of find_all()
            link = i.find("a")["href"]
            detail = self.parseDetail(link)
            self.datas.append(detail)
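The find_all()/find() split used above can be illustrated on a toy snippet (the HTML below is made up for illustration and is not the real Lianjia markup):

from bs4 import BeautifulSoup

html = '<div class="info clear"><a href="/ershoufang/1.html">title</a><a href="/tag">tag</a></div>'
soup = BeautifulSoup(html, "html.parser")
blocks = soup.find_all("div", class_="info clear")   # all listing blocks on the page
first_a = blocks[0].find("a")                        # only the first <a> inside a block
print(first_a["href"])                               # /ershoufang/1.html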
(6) The parseDetail() function
Using the listing links collected by parsePage(), it sends a request to each link and extracts the detailed information from that page.
def parseDetail(self, url):
    response = requests.get(url, headers=self.headers)
    detail = {}
    if response.status_code == 200:
        soup = BeautifulSoup(response.text, "html.parser")
        detail["价格"] = soup.find("span", class_="total").text
        detail["单价"] = soup.find("span", class_="unitPriceValue").text
        detail["小区"] = soup.find("div", class_="communityName").find("a", class_="info").text
        detail["位置"] = soup.find("div", class_="areaName").find("span", class_="info").text
        detail["地铁"] = soup.find("div", class_="areaName").find("a", class_="supplement").text
        base = soup.find("div", class_="base").find_all("li")   # basic information list
        detail["户型"] = base[0].text[4:]
        detail["面积"] = base[2].text[4:]
        detail["朝向"] = base[6].text[4:]
        detail["电梯"] = base[10].text[4:]
        return detail
    else:
        return None
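Indexing base[0], base[2], base[6] and base[10] assumes the <li> items always appear in the same order; if Lianjia rearranges the list, the fields silently get mixed up. A more defensive sketch (an alternative to the code above, assuming each <li> starts with a <span class="label"> prefix, which may differ on the live page) keys the values by their label text instead:

def parseBaseInfo(soup):
    info = {}
    for li in soup.find("div", class_="base").find_all("li"):
        label = li.find("span", class_="label")   # e.g. 房屋户型, 建筑面积 (assumed markup)
        if label is not None:
            # store the value with the label stripped off, e.g. {"房屋户型": "3室2厅1厨2卫"}
            info[label.text.strip()] = li.text.replace(label.text, "").strip()
    return info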
(7) Saving the data to a CSV file
This uses the DataFrame() constructor from the pandas library, which by default sorts the columns by column name. To get a custom column order, pass the columns argument.
# save all the scraped listing data to a CSV file
data = pd.DataFrame(self.datas)
# columns: customize the column order (DataFrame sorts columns by name by default)
columns = ["小区", "户型", "面积", "价格", "单价", "朝向", "电梯", "位置", "地铁"]
data.to_csv("./lianjia_ii.csv", encoding="utf_8_sig", index=False, columns=columns)
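The effect of the columns argument can be checked with a tiny standalone example (the row below is made-up data, not scraped):

import pandas as pd

rows = [{"面积": "89平米", "价格": "500万", "小区": "示例小区"}]
df = pd.DataFrame(rows)
# without columns=, the column order may not be the one we want in the CSV
df.to_csv("demo.csv", encoding="utf_8_sig", index=False, columns=["小区", "面积", "价格"])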
3. Results
4. Complete code:
# -*- coding: utf-8 -*-
# Author: wangshx6
# Date: 2018-11-07
# Description: scrape the details of all second-hand houses on Lianjia Shenzhen and save them to a CSV file

import requests
from bs4 import BeautifulSoup
import pandas as pd
from fake_useragent import UserAgent


class LianjiaSpider(object):

    def __init__(self):
        self.headers = {"User-Agent": UserAgent().random}
        self.datas = list()

    def getMaxPage(self, url):
        response = requests.get(url, headers=self.headers)
        if response.status_code == 200:
            source = response.text
            soup = BeautifulSoup(source, "html.parser")
            pageData = soup.find("div", class_="page-box house-lst-page-box")["page-data"]
            # pageData = '{"totalPage":100,"curPage":1}'; eval() turns the string into a dict
            maxPage = eval(pageData)["totalPage"]
            return maxPage
        else:
            print("fail status: {}".format(response.status_code))
            return None

    def parsePage(self, url):
        maxPage = self.getMaxPage(url)
        # walk through every page and collect the link of each listing
        for pageNum in range(1, maxPage + 1):
            url = "https://sz.lianjia.com/ershoufang/pg{}/".format(pageNum)
            print("Currently crawling: {}".format(url))
            response = requests.get(url, headers=self.headers)
            soup = BeautifulSoup(response.text, "html.parser")
            links = soup.find_all("div", class_="info clear")
            for i in links:
                # each <div class="info clear"> has several <a> tags; only the first is the listing link
                link = i.find("a")["href"]
                detail = self.parseDetail(link)
                self.datas.append(detail)
        # save all the scraped listing data to a CSV file
        data = pd.DataFrame(self.datas)
        # columns: customize the column order (DataFrame sorts columns by name by default)
        columns = ["小区", "户型", "面积", "价格", "单价", "朝向", "电梯", "位置", "地铁"]
        data.to_csv("./lianjia_ii.csv", encoding="utf_8_sig", index=False, columns=columns)

    def parseDetail(self, url):
        response = requests.get(url, headers=self.headers)
        detail = {}
        if response.status_code == 200:
            soup = BeautifulSoup(response.text, "html.parser")
            detail["价格"] = soup.find("span", class_="total").text
            detail["单价"] = soup.find("span", class_="unitPriceValue").text
            detail["小区"] = soup.find("div", class_="communityName").find("a", class_="info").text
            detail["位置"] = soup.find("div", class_="areaName").find("span", class_="info").text
            detail["地铁"] = soup.find("div", class_="areaName").find("a", class_="supplement").text
            base = soup.find("div", class_="base").find_all("li")   # basic information list
            detail["户型"] = base[0].text[4:]
            detail["面积"] = base[2].text[4:]
            detail["朝向"] = base[6].text[4:]
            detail["电梯"] = base[10].text[4:]
            return detail
        else:
            return None


if __name__ == "__main__":
    lianjia = LianjiaSpider()
    lianjia.parsePage("https://sz.lianjia.com/ershoufang/")
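To run the script, the third-party packages it imports (requests, beautifulsoup4, pandas and fake_useragent) need to be installed first, for example with pip.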