Web Scraping, Part 2
程序员文章站
2022-03-02 21:00:01
Basic usage of bs4
from bs4 import BeautifulSoup
from bs4.element import Tag
# Keep practicing; using Python is how the knowledge becomes your own
data = '''<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>soup测试</title>
    <title class="warm" id="hello" href="http://www.google.com/">你那温情的一笑,搞得我瑟瑟发抖</title>
</head>
<body>
<div class="tang">
    <ul>
        <li class="hello" id="world"><a id="world" href="http://www.baidu.com" title="出塞"><!--秦时明月汉时关,万里长征人未还,但使龙城飞将在,不教胡马度阴山--></a></li>
        <list><a href="https://www.baidu.com" title="出塞" style="font-weight: bold"><!--秦时明月汉时关,万里长征人未还,但使龙城飞将在,不教胡马度阴山--></a></list>
        <li><ul href="http://www.163.com" class="taohua" title="huahua">人面不知何处去,桃花依旧笑春风</ul></li>
        <lists class="hello"><a href="http://mi.com" id="hong" title="huahua">去年今日此门中,人面桃花相映红</a></lists>
        <li id="wo"><div href="http://qq.com" name="he" id="gu">故人西辞黄鹤楼,烟花三月下扬州</div></li>
    </ul>
    <ul>
        <li class="hello" id="sf"><a href="http://www.baidu.com" title="出塞"><!--秦时明月汉时关,万里长征人未还,但使龙城飞将在,不教胡马度阴山--></a></li>
        <list><a href="https://www.baidu.com" title="出塞"><!--秦时明月汉时关,万里长征人未还,但使龙城飞将在,不教胡马度阴山--></a></list>
        <li><a href="http://www.163.com" class="taohua">人面不知何处去,桃花依旧笑春风</a></li>
        <lists class="hello"><a href="http://mi.com" id="fhsf">去年今日此门中,人面桃花相映红,不知桃花何处去,出门依旧笑楚风</a></lists>
        <li id="fs"><a href="http://qq.com" name="he" id="gufds">故人西辞黄鹤楼,烟花三月下扬州</a></li>
    </ul>
</div>
<div id="meng">
    <p class="jiang">
        <span>三国猛将</span>
        <ol>
            <pl>关羽</pl>
            <li>张飞</li>
            <li>赵云</li>
            <zl>马超</zl>
            <li>黄忠</li>
        </ol>
        <div class="cao" id="h2">
            <ul>
                <li>典韦</li>
                <li>许褚</li>
                <li>张辽</li>
                <li>张郃</li>
                <li>于禁</li>
                <li>夏侯惇</li>
            </ul>
        </div>
    </p>
</div>
</body>
</html>'''
if __name__ == '__main__':
    # The second argument selects the parser; 'lxml' is the same engine that powers xpath
    # The soup as a whole is a Python object: bs4.BeautifulSoup
    soup = BeautifulSoup(data, 'lxml')
    # Inspect the soup structure
    # Tag (bs4.element.Tag): attribute-style lookup returns the first match in the document
    t = soup.title
    print(t)
    # print(type(t))
    # bs4.element.NavigableString
    s = soup.title.string
    print(s)
    # print(type(s))
    # bs4.element.Comment
    s = soup.li.string
    print(s)
    # print(type(s))
    # Child nodes: direct children only
    # c = soup.body.div.ul.children
    # print(c)
    # for item in c:
    #     print(item)
    ul = soup.body.div.ul
    # Direct children of the <ul>
    d = ul.children
    for item in d:
        print(item.string)
    # Searching the document tree
    # print(soup.find(name='li', id='wo'))
    # print(soup.find(name='li', attrs={'class': 'hello', 'id': 'world'}))
    # lis = soup.find_all('li', class_='hello')
    # print(lis)
    # CSS selector syntax: soup.select()
    # Look up by tag name
    # ret = soup.select('li#wo')
    # ret = soup.select('lists.hello a')
    # print(ret)
    # Look up by class name
    # print(soup.select('.hello'))
    # Look up by id: #
    # print(soup.select('#sf'))
    # Combined lookup
    # print(soup.select('li#world'))
    # A space selects descendants
    # print(soup.select('li #world'))
    # print(soup.select('li a'))
    # print(soup.select('div > ol > pl,zl'))
    # Look up by attribute
    # print(soup.select('div[class="cao"][id="h2"] > ul > li'))
    # print(soup.select('div[class="cao"][id="h2"] > ul > li')[1])
    # tag = soup.find_all(name='title')[1]
    # print(tag)
    # print(tag.name)
    # print(tag.attrs)
    # print(tag.attrs['href'])
    # print(tag.string)
    # print(tag['href'])
    # print(tag.get_text())
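As a quick self-contained check of the selector syntax above, the snippet below runs the same kind of CSS and `find_all` queries against a small fragment of the sample document (assuming `beautifulsoup4` is installed; the built-in `html.parser` is used here so `lxml` is not required):

```python
from bs4 import BeautifulSoup

html = '<div id="meng"><ol><li>张飞</li><li>赵云</li><li>黄忠</li></ol></div>'
soup = BeautifulSoup(html, 'html.parser')

# CSS selector: every <li> descendant of the div with id "meng"
names = [li.string for li in soup.select('div#meng li')]
print(names)  # ['张飞', '赵云', '黄忠']

# find_all returns the same tags for this document
assert [li.string for li in soup.find_all('li')] == names
```

Note that `li.string` is a `NavigableString`, which subclasses `str`, so the values compare equal to plain strings.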
Basic usage of jsonpath
import jsonpath
import json
data = '''{ "store": {
"book": [
{ "category": "reference",
"author": "李白",
"title": "Sayings of the Century",
"price": 8.95
},
{ "category": "fiction",
"author": "杜甫",
"title": "Sword of Honour",
"price": 12.99
},
{ "category": "fiction",
"author": "白居易",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"price": 8.99
},
{ "category": "fiction",
"author": "苏轼",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"price": 22.99
}
],
"bicycle": {
"color": "red",
"price": 19.95
}
}
}'''
json_obj = json.loads(data)  # the encoding argument was deprecated and removed in Python 3.9
# print(jsonpath.jsonpath(json_obj, '$.store.book[*].author'))
# print(jsonpath.jsonpath(json_obj, '$..author'))
# print(jsonpath.jsonpath(json_obj, '$.store.book[?(@.price>12)]'))
# jsonpath indices start at 0
# print(jsonpath.jsonpath(json_obj, '$.store.book[0]'))
# @ is the current object; length - 1 addresses the last element of the list
# print(jsonpath.jsonpath(json_obj, '$.store.book[(@.length-1)]'))
print(jsonpath.jsonpath(json_obj, '$.store.book[?(@.isbn)]'))
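To see what queries like `$..author` and `$.store.book[?(@.price>12)]` actually do, here is a plain-Python sketch of the same semantics using only the standard library (`collect` is a hypothetical helper written for illustration, not part of the `jsonpath` package):

```python
import json

def collect(obj, key):
    """Recursively gather every value stored under `key`, like jsonpath's $..key."""
    found = []
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == key:
                found.append(v)
            found.extend(collect(v, key))
    elif isinstance(obj, list):
        for item in obj:
            found.extend(collect(item, key))
    return found

doc = json.loads('''{"store": {
    "book": [{"author": "李白", "price": 8.95},
             {"author": "杜甫", "price": 12.99}],
    "bicycle": {"color": "red"}}}''')

# Recursive descent, like $..author
print(collect(doc, 'author'))  # ['李白', '杜甫']

# Filter expression, like $.store.book[?(@.price>12)]
expensive = [b['author'] for b in doc['store']['book'] if b['price'] > 12]
print(expensive)  # ['杜甫']
```

The recursion walks dicts and lists alike, which is why `$..author` finds authors at any nesting depth, while the filter form only iterates the one list it is applied to.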