
Web Crawler Notes (Day 6): Mzitu


Scraping Mzitu with multiprocessing: http://www.mzitu.com

The complete code is below.

For background on processes, see the earlier post 进程和线程——Python中的实现 (Processes and Threads: Implementation in Python).
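The script pairs one producer process, which pushes (image URL, referer) tuples into a multiprocessing.Queue, with a Pool of downloader workers fed from the main process. Here is a minimal, self-contained sketch of that producer/consumer pattern; the produce and consume functions are toy stand-ins, not part of the crawler:

import multiprocessing
from multiprocessing import Queue, Pool
from queue import Empty


def produce(queue):
    # producer: push work items into the shared queue
    for i in range(5):
        queue.put(i)


def consume(item):
    # worker: handle one item taken off the queue
    print('handled', item)


if __name__ == '__main__':
    q = Queue()
    p = multiprocessing.Process(target=produce, args=(q,))
    p.start()
    pool = Pool(2)
    while True:
        try:
            item = q.get(timeout=5)  # give up after 5 idle seconds
        except Empty:
            break
        pool.apply_async(consume, (item,))
    pool.close()
    pool.join()
    p.join()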

import requests
from lxml import etree
import os
import multiprocessing
from multiprocessing import Queue, Pool
from queue import Empty  # multiprocessing.Queue.get raises queue.Empty on timeout


def get_all_image_url(queue):
    '''Collect the first-level gallery links from every list page'''
    page = 1
    while page < 120:  # crawl list pages 1 through 119
        print('Fetching list page:', page)
        url = 'http://www.mzitu.com/page/{}'.format(page)
        page += 1
        response = requests.get(url)
        html_ele = etree.HTML(response.text)
        href_list = html_ele.xpath('//ul[@id="pins"]/li/a/@href')
        for href in href_list:
            # print(href)
            parse_detailed_page(href, queue)


def parse_detailed_page(url_href, queue):
    '''Collect the URL of every image under one first-level link'''
    response = requests.get(url_href)
    html_ele = etree.HTML(response.text)
    # the next-to-last <span> in the pagination bar holds the page count
    max_page = html_ele.xpath('//div[@class="pagenavi"]/a/span/text()')[-2]
    for i in range(1, int(max_page)+1):
        page_url = url_href + '/' + str(i)
        response = requests.get(page_url)
        html_ele = etree.HTML(response.text)
        img_url = html_ele.xpath('//div[@class="main-image"]/p/a/img/@src')[0]
        # download_img(img_url, url_href)
        queue.put((img_url, url_href))


def download_img(img_url_referer_url):
    '''Download an image URL taken off the queue'''
    (img_url, referer) = img_url_referer_url
    headers = {
        # mzitu blocks hotlinking, so send the gallery page as the Referer
        'referer': referer,
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
    }
    # create the output directory if it does not exist
    os.makedirs('download', exist_ok=True)
    filename = 'download/' + img_url.split('/')[-1]
    # request.urlretrieve(img_url, filename)
    response = requests.get(img_url, headers=headers)
    with open(filename, 'wb') as f:
        f.write(response.content)


if __name__ == '__main__':
    # the next three lines start a producer process that collects image URLs into the queue
    q = Queue()
    p = multiprocessing.Process(target=get_all_image_url, args=(q, ))
    p.start()
    
    # number of worker processes in the download pool
    download_pool = Pool(5)
    while True:
        try:
            image_url_referer_url = q.get(timeout=60)
            download_pool.apply_async(download_img, (image_url_referer_url,))
        except Empty:
            # nothing new for 60 seconds: assume the producer has finished
            print('Queue is empty!')
            break
    download_pool.close()
    download_pool.join()
    # join the producer before the program exits
    p.join()
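
A note on the parsing: the two XPath expressions, //ul[@id="pins"]/li/a/@href and //div[@class="main-image"]/p/a/img/@src, match mzitu's markup as it was when this was written and will break if the site's layout changes. Here is the same lxml extraction pattern run against a hypothetical static snippet, so the selectors can be tested without hitting the site:

from lxml import etree

# hypothetical markup mirroring the structures the crawler targets
html = '''
<ul id="pins">
  <li><a href="http://www.mzitu.com/1001">set 1</a></li>
  <li><a href="http://www.mzitu.com/1002">set 2</a></li>
</ul>
<div class="main-image"><p><a><img src="http://i.example.com/a.jpg"/></a></p></div>
'''

tree = etree.HTML(html)
hrefs = tree.xpath('//ul[@id="pins"]/li/a/@href')  # first-level links
img = tree.xpath('//div[@class="main-image"]/p/a/img/@src')[0]
print(hrefs)  # ['http://www.mzitu.com/1001', 'http://www.mzitu.com/1002']
print(img)    # http://i.example.com/a.jpg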

 
