Unable to Crawl Taobao Product Pages
Problem: unable to crawl the Taobao product search page.
Example code:
import requests
import re

def getHTMLText(url):
    # Fetch the page; return its text, or an empty string on any failure.
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ""

def parsePage(ilt, html):
    # Pull price/title pairs out of the JSON that the search page embeds.
    try:
        plt = re.findall(r'\"view_price\"\:\"[\d\.]*\"', html)
        tlt = re.findall(r'\"raw_title\"\:\".*?\"', html)
        for i in range(len(plt)):
            price = eval(plt[i].split(':')[1])  # drop the key, unquote the value
            title = eval(tlt[i].split(':')[1])
            ilt.append([price, title])
    except:
        print("")

def printGoodsList(ilt):
    tplt = "{:4}\t{:8}\t{:16}"
    print(tplt.format("序号", "价格", "商品名称"))  # No. / price / product title
    count = 0
    for g in ilt:
        count = count + 1
        print(tplt.format(count, g[0], g[1]))

def main():
    goods = input("请输入要搜索的商品:")          # product to search for
    depth = input("请输入想要搜索的商品的页数:")  # how many result pages
    depth = int(depth)
    start_url = 'https://s.taobao.com/search?q=' + goods
    infoList = []
    for i in range(depth):
        try:
            url = start_url + '&s=' + str(44 * i)  # each result page holds 44 items
            html = getHTMLText(url)
            parsePage(infoList, html)
        except:
            continue
    printGoodsList(infoList)

main()
The code runs without any errors, but the output contains nothing beyond the header row: no product data is crawled at all.
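For context, parsePage depends on the result data that the search page embeds as JSON inside the HTML; the two regexes pick out the view_price and raw_title fields. A minimal sketch against a made-up fragment (the field names match what the regexes target; the values are invented) shows what a successful extraction looks like:

import re

# Hypothetical fragment of the JSON a Taobao search results page embeds in its HTML.
sample = '"raw_title":"联想笔记本电脑","view_price":"5499.00"'

plt = re.findall(r'\"view_price\"\:\"[\d\.]*\"', sample)
tlt = re.findall(r'\"raw_title\"\:\".*?\"', sample)
print(eval(plt[0].split(':')[1]))  # 5499.00
print(eval(tlt[0].split(':')[1]))  # 联想笔记本电脑

When the request is blocked and the response is a login or verification page instead, neither regex matches anything, so the list stays empty and only the header row is printed.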
Cause: Taobao's anti-crawler mechanism rejects the request, so no data comes back.
Have a look at Taobao's robots.txt:
https://www.taobao.com/robots.txt
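At the time this post was written, the file contained entries like the following (it may have changed since):

User-agent: Baiduspider
Disallow: /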
Analysis: the first line names Baiduspider as the crawler the rule applies to, and the second line forbids it from crawling any path beginning with '/', i.e. the entire site. Note that robots.txt is only a declared convention; what actually stops this script is the server rejecting requests that don't look like they come from a logged-in browser, which is why the fix below works.
Solution: add a headers argument to the request.
Open the Taobao page and search for whatever you want; here I searched for 笔记本电脑 (laptops).
Press F12 to open the developer tools.
Click the Network tab, select All, trigger the search, and find the request named "search" in the list. Right-click it and choose Copy → Copy as cURL (bash).
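The copied command will look roughly like this (a trimmed, hypothetical example; your own copy will carry many more headers and a long cookie value):

curl 'https://s.taobao.com/search?q=笔记本电脑' \
  -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...' \
  -H 'referer: https://www.taobao.com/' \
  -H 'cookie: <your session cookie>'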
Then open the following site:
https://curl.trillworks.com/
and paste the copied command into the "curl command" box; it converts the command into Python requests code.
Copy the headers dict from the generated Python requests code into the first function, getHTMLText, and change
r = requests.get(url,timeout=30)
to
r = requests.get(url,headers=headers,timeout=30)
import requests
import re

def getHTMLText(url):
    # Fetch the page with browser-like headers; return "" on any failure.
    try:
        headers = {
            'authority': 's.taobao.com',
            'cache-control': 'max-age=0',
            'upgrade-insecure-requests': '1',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36',
            'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
            'sec-fetch-site': 'same-origin',
            'sec-fetch-mode': 'navigate',
            'sec-fetch-user': '?1',
            'sec-fetch-dest': 'document',
            'referer': '********',  # paste your own value here
            'accept-language': 'zh-CN,zh;q=0.9',
            'cookie': '******',     # paste your own value here
        }
        r = requests.get(url, headers=headers, timeout=30)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ""

def parsePage(ilt, html):
    # Same parsing as before: regex out the embedded price/title JSON fields.
    try:
        plt = re.findall(r'\"view_price\"\:\"[\d\.]*\"', html)
        tlt = re.findall(r'\"raw_title\"\:\".*?\"', html)
        for i in range(len(plt)):
            price = eval(plt[i].split(':')[1])
            title = eval(tlt[i].split(':')[1])
            ilt.append([price, title])
    except:
        print("")

def printGoodsList(ilt):
    tplt = "{:4}\t{:8}\t{:16}"
    print(tplt.format("序号", "价格", "商品名称"))
    count = 0
    for g in ilt:
        count = count + 1
        print(tplt.format(count, g[0], g[1]))

def main():
    goods = input("请输入要搜索的商品:")
    depth = input("请输入想要搜索的商品的页数:")
    depth = int(depth)
    start_url = 'https://s.taobao.com/search?q=' + goods
    infoList = []
    for i in range(depth):
        try:
            url = start_url + '&s=' + str(44 * i)
            html = getHTMLText(url)
            parsePage(infoList, html)
        except:
            continue
    printGoodsList(infoList)

main()
The referer and cookie values are too long to show, so I have replaced them with asterisks above; just paste in your own headers.
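If you would rather not paste the long cookie string into the source file at all, one option (my own suggestion, not part of the original course code) is to keep it in an environment variable and read it at runtime:

import os

# Hypothetical setup: run `export TAOBAO_COOKIE='...'` in your shell first,
# then build the header entry from the environment instead of hardcoding it.
cookie = os.environ.get('TAOBAO_COOKIE', '')
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36',
    'cookie': cookie,
}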
With these headers in place, the crawl works normally.
The example above comes from Professor 嵩天's course Python网络爬虫与信息提取 (Python Web Crawling and Information Extraction). (The method in this post is for learning and discussion only, not for commercial use.)
Original article: https://blog.csdn.net/weixin_44911081/article/details/110407955