
莫烦 (Morvan) Web Scraping: Baidu Baike


My study notes from the video tutorial.

Video: https://www.bilibili.com/video/av17920849?p=6
Source code: https://morvanzhou.github.io/tutorials/data-manipulation/scraping/2-04-practice-baidu-baike/
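
The script below does a 20-step random walk over Baidu Baike: starting from the 网络爬虫 (web crawler) entry, it prints each page's title, collects the links that point to other entries, follows one of them at random, and backtracks to the previous page whenever a page has no such links.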

import re
import random
from urllib.request import urlopen
from bs4 import BeautifulSoup

base_url = "https://baike.baidu.com"
# Browsing history; the seed is the 网络爬虫 (web crawler) entry.
his = ["/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB/5162711"]

for i in range(20):
    # Always visit the page recorded last in the history.
    url = base_url + his[-1]

    html = urlopen(url).read().decode('utf-8')
    soup = BeautifulSoup(html, features='lxml')
    print(i, soup.find('h1').get_text(), '    url: ', his[-1])

    # Find valid entry links: target="_blank" anchors whose href is
    # "/item/" followed only by percent-encoded bytes.
    sub_urls = soup.find_all("a", {"target": "_blank", "href": re.compile("/item/(%.{2})+$")})

    if len(sub_urls) != 0:
        # Follow one candidate link chosen at random.
        his.append(random.sample(sub_urls, 1)[0]['href'])
    elif len(his) > 1:
        # No valid sub-link found: backtrack to the previous page
        # (the extra length check keeps the history from emptying out).
        his.pop()
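
The regex keeps only hrefs that are "/item/" followed entirely by percent-encoded bytes, so links carrying a numeric disambiguation suffix, like the seed path itself, are filtered out. A quick standalone check (the 蜘蛛/spider path is only illustrative):

import re

pattern = re.compile("/item/(%.{2})+$")
# BeautifulSoup applies attribute regexes with search(), so mirror that here.
print(bool(pattern.search("/item/%E8%9C%98%E8%9B%9B")))                            # True: all percent-encoded
print(bool(pattern.search("/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB/5162711")))  # False: trailing id suffix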


Note: a "getaddrinfo failed" error means the hostname lookup (DNS resolution) failed; this points to a network or proxy problem rather than a bug in the script.
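
A minimal sketch of guarding against such transient network failures, assuming the fetch is factored into a helper (the retry count and delay are arbitrary choices, not part of the original tutorial):

import time
from urllib.error import URLError
from urllib.request import urlopen

def fetch(url, retries=3):
    # urlopen wraps getaddrinfo failures in URLError, so a few retries
    # cover transient DNS/network hiccups before giving up.
    for attempt in range(retries):
        try:
            return urlopen(url).read().decode('utf-8')
        except URLError as err:
            print('request failed:', err.reason)
            time.sleep(1)
    raise RuntimeError('could not fetch ' + url)

With this helper, the line html = urlopen(url).read().decode('utf-8') in the loop above would become html = fetch(url).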