Queue

Tornado's `tornado.queues.Queue` implements an asynchronous producer/consumer pattern for coroutines, analogous to what the standard library's `queue` module provides for threads.
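
A minimal sketch of that producer/consumer pattern, written in the same gen.coroutine style as the spider below (the coroutine names here are made up for illustration): one coroutine puts items, another gets them and calls task_done, and the main coroutine waits on q.join.

# coding: utf-8
from tornado import gen, ioloop, queues


@gen.coroutine
def producer_consumer_demo():
    q = queues.Queue()

    @gen.coroutine
    def producer():
        for i in range(5):
            yield q.put(i)        # enqueue an item (never waits: the queue is unbounded)
            print('produced %d' % i)

    @gen.coroutine
    def consumer():
        while True:
            item = yield q.get()  # pauses until an item is available
            try:
                print('consumed %d' % item)
            finally:
                q.task_done()     # mark the item as processed

    consumer()                    # start the consumer in the background
    yield producer()              # wait until the producer has enqueued everything
    yield q.join()                # wait until every item has been consumed


if __name__ == '__main__':
    ioloop.IOLoop.current().run_sync(producer_consumer_demo)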

Queue.get pauses until there is an item in the queue. If the queue was created with a maximum size, Queue.put pauses until a slot is free once the queue is full. The queue also keeps a count of unfinished tasks, which starts at zero: put increments it and task_done decrements it.
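
That counting behaviour can be seen in a small sketch (the maxsize value and the item values here are arbitrary, chosen only to illustrate put/get/task_done/join):

# coding: utf-8
from tornado import gen, ioloop, queues


@gen.coroutine
def queue_counting_demo():
    q = queues.Queue(maxsize=2)      # bounded: another put would pause until a slot frees up

    yield q.put('a')                 # unfinished tasks: 0 -> 1
    yield q.put('b')                 # unfinished tasks: 1 -> 2
    print('queued: %d' % q.qsize())  # -> 2

    item = yield q.get()             # returns at once because items are waiting
    print('got %s' % item)
    q.task_done()                    # unfinished tasks: 2 -> 1

    item = yield q.get()
    print('got %s' % item)
    q.task_done()                    # unfinished tasks: 1 -> 0

    yield q.join()                   # returns immediately: nothing is left unfinished


if __name__ == '__main__':
    ioloop.IOLoop.current().run_sync(queue_counting_demo)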

Web spider example:

At the start, the queue contains only the base URL. Each worker takes a URL off the queue, fetches and parses the page, puts any newly discovered links back into the queue, and then calls task_done to decrement the counter. Eventually every page has been crawled, the count of unfinished tasks drops to zero, and the main coroutine waiting on q.join is notified.

# coding: utf-8
import time
from datetime import timedelta

try:
    from HTMLParser import HTMLParser
    from urlparse import urljoin, urldefrag
except ImportError:
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urldefrag

from tornado import httpclient, gen, ioloop, queues

base_url = 'http://www.tornadoweb.org/en/stable/'
concurrency = 10


@gen.coroutine
def get_links_from_url(url):
    """
    从队列中取出一个url 然后解析

    :param url:
    :return:
    """

    try:
        response = yield httpclient.AsyncHTTPClient().fetch(url)
        print('fetched %s' % url)

        html = response.body if isinstance(response.body, str) else response.body.decode()
        urls = [urljoin(url, remove_fragment(new_url))
                for new_url in get_links(html)]
    except Exception as e:
        print('Exception: %s %s' % (e, url))
        raise gen.Return([])

    raise gen.Return(urls)


def remove_fragment(url):
    """
    清除url中的#

    :param url:
    :return:
    """
    pure_url, frag = urldefrag(url)
    return pure_url


def get_links(html):
    """
    获取html页面中的链接

    :param html:
    :return:
    """
    class URLSeeker(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.urls = []

        def handle_starttag(self, tag, attrs):
            href = dict(attrs).get('href')
            if href and tag == 'a':
                self.urls.append(href)

    url_seeker = URLSeeker()
    url_seeker.feed(html)
    return url_seeker.urls


@gen.coroutine
def main():

    q = queues.Queue()
    start = time.time()
    fetching, fetched = set(), set()

    @gen.coroutine
    def fetch_url():
        current_url = yield q.get()  # take the next URL from the queue
        try:
            if current_url in fetching:
                return

            print('fetching %s' % current_url)
            fetching.add(current_url)  # mark this URL as being fetched
            urls = yield get_links_from_url(current_url)  # fetch and parse the page
            fetched.add(current_url)  # mark this URL as fetched

            for new_url in urls:
                # only follow links under the base URL, otherwise external links would make the crawl endless
                if new_url.startswith(base_url):
                    yield q.put(new_url)  # enqueue the newly found URL

        finally:
            q.task_done()  # mark this queue item as done (decrements the unfinished-task count)

    @gen.coroutine
    def worker():
        while True:
            yield fetch_url()

    q.put(base_url)  # seed the queue with the base URL

    for _ in range(concurrency):
        # start `concurrency` workers
        worker()

    yield q.join(timeout=timedelta(seconds=300))  # block until every enqueued URL has been processed (or the timeout expires)
    assert fetching == fetched
    print('Done in %d seconds, fetched %s URLs.' % (
        time.time() - start, len(fetched)))


if __name__ == '__main__':

    import logging
    logging.basicConfig()
    io_loop = ioloop.IOLoop.current()
    io_loop.run_sync(main)