Notes on asyncio (Part 2)
1. asyncio — Getting Started
The previous post gave an overall introduction to asyncio; after reading it you should have a basic picture, and if you were curious you have probably already tried it in some small snippets. In this post we will build a simple crawler and improve it step by step, from simple to complex, until it does what we want.
https://github.com/HackerNews/API documents the Hacker News API; the crawler in this post calls these endpoints. Requests are made with the aiohttp module. Keep in mind that you cannot use the requests module here, because its blocking calls would stall the event loop; for details on the client API, see the aiohttp documentation.
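Before diving into the full program, here is a minimal sketch of what a single aiohttp request against this API looks like (a standalone illustration, without the proxy and timeout handling the real code below adds):

import asyncio

import aiohttp


async def fetch_item(item_id):
    # session.get() suspends this coroutine while waiting on the network;
    # requests.get() would block the entire event loop instead, which is
    # exactly why requests cannot be used here
    url = "https://hacker-news.firebaseio.com/v0/item/{}.json".format(item_id)
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.json()


loop = asyncio.get_event_loop()
item = loop.run_until_complete(fetch_item(8863))
print(item.get('title'))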
Now let's look at the concrete implementation. The code below crawls all the comments under a single post; if no id is passed, it defaults to crawling the comments of post 8863.
import asyncio
import argparse
import logging
from urllib.parse import urlparse, parse_qs
from datetime import datetime

import aiohttp
import async_timeout

LOGGER_FORMAT = '%(asctime)s %(message)s'
URL_TEMPLATE = "https://hacker-news.firebaseio.com/v0/item/{}.json"
FETCH_TIMEOUT = 10

parser = argparse.ArgumentParser(
    description='Fetch all comments of the requested post')
parser.add_argument('--id', type=int, default=8863,
                    help='ID of the post to fetch, defaults to 8863')
parser.add_argument('--url', type=str, help='URL of an HN post')
parser.add_argument('--verbose', action='store_true', help='verbose output')

logging.basicConfig(format=LOGGER_FORMAT, datefmt='[%H:%M:%S]')
log = logging.getLogger()
log.setLevel(logging.INFO)

fetch_counter = 0


async def fetch(session, url):
    """Fetch a URL using aiohttp and return the parsed JSON response."""
    global fetch_counter
    with async_timeout.timeout(FETCH_TIMEOUT):
        fetch_counter += 1
        # the API is blocked where I live, hence the local proxy
        async with session.get(url, proxy='http://127.0.0.1:1081') as response:
            return await response.json()


async def post_number_of_comments(loop, session, post_id):
    """Recursively fetch all the comments of the requested post."""
    url = URL_TEMPLATE.format(post_id)
    now = datetime.now()
    response = await fetch(session, url)
    log.debug('{:^6} > Fetching of {} took {} seconds'.format(
        post_id, url, (datetime.now() - now).total_seconds()))

    if 'kids' not in response:  # the post has no comments
        return 0

    # the number of direct comments on this post
    number_of_comments = len(response['kids'])
    log.debug('{:^6} > Fetching {} child posts'.format(
        post_id, number_of_comments))
    tasks = [post_number_of_comments(
        loop, session, kid_id) for kid_id in response['kids']]

    # collect the results of all the coroutines
    results = await asyncio.gather(*tasks)
    number_of_comments += sum(results)
    log.debug('{:^6} > {} comments'.format(post_id, number_of_comments))
    return number_of_comments


def id_from_hn_url(url):
    """Extract the post ID from the URL passed on the command line."""
    parse_result = urlparse(url)
    try:
        return parse_qs(parse_result.query)['id'][0]
    except (KeyError, IndexError):
        return None


async def main(loop, post_id):
    async with aiohttp.ClientSession(loop=loop) as session:
        now = datetime.now()
        comments = await post_number_of_comments(loop, session, post_id)
        log.info(
            '> Calculating comments took {:.2f} seconds and {} fetches'.format(
                (datetime.now() - now).total_seconds(), fetch_counter))

    return comments


if __name__ == '__main__':
    args = parser.parse_args()
    if args.verbose:
        log.setLevel(logging.DEBUG)

    post_id = id_from_hn_url(args.url) if args.url else args.id

    loop = asyncio.get_event_loop()
    comments = loop.run_until_complete(main(loop, post_id))
    log.info("-- Post {} has {} comments".format(post_id, comments))
    loop.close()
One more reminder: the API needs a proxy to be reachable from where I am, which is why the requests go through my local proxy; otherwise nothing can be fetched. A normal run looks like this:
[23:24:37] > Calculating comments took 2.98 seconds and 73 fetches
[23:24:37] -- Post 8863 has 72 comments
Without the proxy, you get an error like this:
Traceback (most recent call last):
  File "/Users/zhaofan/vs_python/python_asyncio/ex1.py", line 41, in fetch
    async with session.get(url) as response:
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 1005, in __aenter__
    self._resp = await self._coro
  File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 476, in _request
    timeout=real_timeout
  File "/usr/local/lib/python3.7/site-packages/aiohttp/connector.py", line 522, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "/usr/local/lib/python3.7/site-packages/aiohttp/connector.py", line 854, in _create_connection
    req, traces, timeout)
  File "/usr/local/lib/python3.7/site-packages/aiohttp/connector.py", line 974, in _create_direct_connection
    req=req, client_error=client_error)
  File "/usr/local/lib/python3.7/site-packages/aiohttp/connector.py", line 924, in _wrap_create_connection
    await self._loop.create_connection(*args, **kwargs))
  File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 946, in create_connection
    await self.sock_connect(sock, address)
  File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/selector_events.py", line 464, in sock_connect
    return await fut
concurrent.futures._base.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/zhaofan/vs_python/python_asyncio/ex1.py", line 115, in <module>
    comments = loop.run_until_complete(main(loop, post_id))
  File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
    return future.result()
  File "/Users/zhaofan/vs_python/python_asyncio/ex1.py", line 99, in main
    comments = await post_number_of_comments(loop, session, post_id)
  File "/Users/zhaofan/vs_python/python_asyncio/ex1.py", line 51, in post_number_of_comments
    response = await fetch(session, url)
  File "/Users/zhaofan/vs_python/python_asyncio/ex1.py", line 42, in fetch
    return await response.json()
  File "/usr/local/lib/python3.7/site-packages/async_timeout/__init__.py", line 45, in __exit__
    self._do_exit(exc_type)
  File "/usr/local/lib/python3.7/site-packages/async_timeout/__init__.py", line 92, in _do_exit
    raise asyncio.TimeoutError
concurrent.futures._base.TimeoutError
Also note that in the code above we used results = await asyncio.gather(*tasks) to wait for all the coroutines to finish and collect their results; see the official documentation for gather for the details.
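If you want to see gather's semantics in isolation first, here is a minimal, self-contained sketch (the square coroutine is just a made-up stand-in for a network request):

import asyncio


async def square(n):
    # pretend this sleep is a network request
    await asyncio.sleep(0.1)
    return n * n


async def main():
    # gather() runs all the coroutines concurrently and returns their
    # results in the order they were passed in, regardless of which
    # coroutine happens to finish first
    tasks = [square(n) for n in range(5)]
    results = await asyncio.gather(*tasks)
    print(results)  # [0, 1, 4, 9, 16]


loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()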
We used recursion there as well. So far you may be thinking this is all pretty easy; the code does not look that different from the ordinary blocking code we write every day. Hold on to that good feeling and keep reading.
2. asyncio — Going Further
Now suppose that while the crawler is collecting comments, we want it to email us whenever a post's comment count exceeds some threshold. This is actually a very useful feature. Take my own blog as an example: if you want to follow my articles but do not want to check the site all the time, you probably only care about the posts everyone else is watching, that is, the ones with the most comments. So let's change the code accordingly:
import asyncio
import argparse
import logging
import random
from urllib.parse import urlparse, parse_qs
from datetime import datetime

import aiohttp
import async_timeout

LOGGER_FORMAT = '%(asctime)s %(message)s'
URL_TEMPLATE = "https://hacker-news.firebaseio.com/v0/item/{}.json"
FETCH_TIMEOUT = 10
# the comment-count threshold above which we send an email
MIN_COMMENTS = 2

parser = argparse.ArgumentParser(
    description='Fetch all comments of the requested post')
parser.add_argument('--id', type=int, default=8863,
                    help='ID of the post to fetch, defaults to 8863')
parser.add_argument('--url', type=str, help='URL of an HN post')
parser.add_argument('--verbose', action='store_true', help='verbose output')

logging.basicConfig(format=LOGGER_FORMAT, datefmt='[%H:%M:%S]')
log = logging.getLogger()
log.setLevel(logging.INFO)

fetch_counter = 0


async def fetch(session, url):
    """Fetch a URL using aiohttp and return the parsed JSON response."""
    global fetch_counter
    with async_timeout.timeout(FETCH_TIMEOUT):
        fetch_counter += 1
        # the API is blocked where I live, hence the local proxy
        async with session.get(url, proxy='http://127.0.0.1:1081') as response:
            return await response.json()


async def post_number_of_comments(loop, session, post_id):
    """Recursively fetch all the comments of the requested post."""
    url = URL_TEMPLATE.format(post_id)
    now = datetime.now()
    response = await fetch(session, url)
    log.debug('{:^6} > Fetching of {} took {} seconds'.format(
        post_id, url, (datetime.now() - now).total_seconds()))

    if 'kids' not in response:  # the post has no comments
        return 0

    # the number of direct comments on this post
    number_of_comments = len(response['kids'])
    log.debug('{:^6} > Fetching {} child posts'.format(
        post_id, number_of_comments))
    tasks = [post_number_of_comments(
        loop, session, kid_id) for kid_id in response['kids']]

    # collect the results of all the tasks
    results = await asyncio.gather(*tasks)
    number_of_comments += sum(results)
    log.debug('{:^6} > {} comments'.format(post_id, number_of_comments))

    if number_of_comments > MIN_COMMENTS:
        await email_post(response)

    return number_of_comments


async def email_post(post):
    """Simulate sending an email; nothing is actually sent."""
    await asyncio.sleep(random.random() * 3)
    log.info("email send success")


def id_from_hn_url(url):
    """Extract the post ID from the URL passed on the command line."""
    parse_result = urlparse(url)
    try:
        return parse_qs(parse_result.query)['id'][0]
    except (KeyError, IndexError):
        return None


async def main(loop, post_id):
    async with aiohttp.ClientSession(loop=loop) as session:
        now = datetime.now()
        comments = await post_number_of_comments(loop, session, post_id)
        log.info(
            '> Calculating comments took {:.2f} seconds and {} fetches'.format(
                (datetime.now() - now).total_seconds(), fetch_counter))

    return comments


if __name__ == '__main__':
    args = parser.parse_args()
    if args.verbose:
        log.setLevel(logging.DEBUG)

    post_id = id_from_hn_url(args.url) if args.url else args.id

    loop = asyncio.get_event_loop()
    comments = loop.run_until_complete(main(loop, post_id))
    log.info("-- Post {} has {} comments".format(post_id, comments))
    loop.close()
The run output is:
[23:24:17] email send success
[23:24:18] email send success
[23:24:18] email send success
[23:24:19] email send success
[23:24:19] email send success
[23:24:20] email send success
[23:24:21] email send success
[23:24:21] email send success
[23:24:24] email send success
[23:24:24] > Calculating comments took 10.09 seconds and 73 fetches
[23:24:24] -- Post 8863 has 72 comments
You will notice this run took longer than before. At the point where we send the email we wrote await email_post(response), so the program stops there and waits for that task to finish. But what we really care about is the main task, collecting all the comments; sending the notification email is a secondary task. How do we change things so the main task keeps running without waiting on the secondary one? The asyncio API has ensure_future for this. One caveat: ensure_future was the way to do it before Python 3.7, but from 3.7 on, create_task is recommended instead; see the create_task documentation for details.
The documentation states this explicitly:
asyncio.create_task(coro)
Wrap the coro coroutine into a Task and schedule its execution. Return the Task object.
The task is executed in the loop returned by get_running_loop(). RuntimeError is raised if there is no running loop in the current thread.
This function has been added in Python 3.7. Prior to Python 3.7, the low-level asyncio.ensure_future() function can be used instead.
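To make that concrete, here is a small version-compatible sketch (the send_email coroutine is hypothetical, just something to schedule):

import asyncio
import sys


async def send_email():
    # hypothetical secondary task
    await asyncio.sleep(1)
    print("email send success")


async def main():
    if sys.version_info >= (3, 7):
        # Python 3.7+: the recommended high-level API
        task = asyncio.create_task(send_email())
    else:
        # before 3.7: the lower-level equivalent
        task = asyncio.ensure_future(send_email())
    # the coroutine is now scheduled to run; awaiting the task is optional
    await task


asyncio.get_event_loop().run_until_complete(main())

Both calls schedule the coroutine and hand back a Task; the difference is that create_task() is clearer about what it does and refuses to run without an active event loop.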
This wraps our coroutine in a Task object, schedules its execution on the loop, and returns the Task to us. With that, let's change the code again:
Replace the line await email_post(response) with: asyncio.ensure_future(email_post(response))
But when we run it, something unfortunate happens:
[23:40:06] email send success
[23:40:06] > Calculating comments took 3.30 seconds and 73 fetches
[23:40:06] -- Post 8863 has 72 comments
[23:40:06] Task was destroyed but it is pending!
task: <Task pending coro=<email_post() done, defined at ex1.py:76> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x1087dde58>()]>>
[23:40:06] Task was destroyed but it is pending!
task: <Task pending coro=<email_post() done, defined at ex1.py:76> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x108a9e4f8>()]>>
[23:40:06] Task was destroyed but it is pending!
task: <Task pending coro=<email_post() done, defined at ex1.py:76> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x108a9e9a8>()]>>
[23:40:06] Task was destroyed but it is pending!
task: <Task pending coro=<email_post() done, defined at ex1.py:76> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x108a9e918>()]>>
[23:40:06] Task was destroyed but it is pending!
task: <Task pending coro=<email_post() done, defined at ex1.py:76> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x108a9ee88>()]>>
[23:40:06] Task was destroyed but it is pending!
task: <Task pending coro=<email_post() done, defined at ex1.py:76> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x108a9ef48>()]>>
[23:40:06] Task was destroyed but it is pending!
task: <Task pending coro=<email_post() done, defined at ex1.py:76> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x108a9efd8>()]>>
[23:40:06] Task was destroyed but it is pending!
task: <Task pending coro=<email_post() done, defined at ex1.py:76> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x1087dde28>()]>>
Don't panic when you see this error; almost everyone hits it when starting out with asyncio, and the previous post in this series explained the cause. What happens here is that as soon as the post_number_of_comments coroutine returns, the loop is forcibly closed, leaving our email_post tasks no time to complete. How do we fix it? Let's keep changing the code:
import asyncio
import argparse
import logging
import random
from urllib.parse import urlparse, parse_qs
from datetime import datetime

import aiohttp
import async_timeout

LOGGER_FORMAT = '%(asctime)s %(message)s'
URL_TEMPLATE = "https://hacker-news.firebaseio.com/v0/item/{}.json"
FETCH_TIMEOUT = 10
# the comment-count threshold above which we send an email
MIN_COMMENTS = 2

parser = argparse.ArgumentParser(
    description='Fetch all comments of the requested post')
parser.add_argument('--id', type=int, default=8863,
                    help='ID of the post to fetch, defaults to 8863')
parser.add_argument('--url', type=str, help='URL of an HN post')
parser.add_argument('--verbose', action='store_true', help='verbose output')

logging.basicConfig(format=LOGGER_FORMAT, datefmt='[%H:%M:%S]')
log = logging.getLogger()
log.setLevel(logging.INFO)

fetch_counter = 0


async def fetch(session, url):
    """Fetch a URL using aiohttp and return the parsed JSON response."""
    global fetch_counter
    with async_timeout.timeout(FETCH_TIMEOUT):
        fetch_counter += 1
        # the API is blocked where I live, hence the local proxy
        async with session.get(url, proxy='http://127.0.0.1:1081') as response:
            return await response.json()


async def post_number_of_comments(loop, session, post_id):
    """Recursively fetch all the comments of the requested post."""
    url = URL_TEMPLATE.format(post_id)
    now = datetime.now()
    response = await fetch(session, url)
    log.debug('{:^6} > Fetching of {} took {} seconds'.format(
        post_id, url, (datetime.now() - now).total_seconds()))

    if 'kids' not in response:  # the post has no comments
        return 0

    # the number of direct comments on this post
    number_of_comments = len(response['kids'])
    log.debug('{:^6} > Fetching {} child posts'.format(
        post_id, number_of_comments))
    tasks = [post_number_of_comments(
        loop, session, kid_id) for kid_id in response['kids']]

    # collect the results of all the tasks
    results = await asyncio.gather(*tasks)
    number_of_comments += sum(results)
    log.debug('{:^6} > {} comments'.format(post_id, number_of_comments))

    if number_of_comments > MIN_COMMENTS:
        # await email_post(response)
        asyncio.ensure_future(email_post(response))

    return number_of_comments


async def email_post(post):
    """Simulate sending an email; nothing is actually sent."""
    await asyncio.sleep(random.random() * 3)
    log.info("email send success")


def id_from_hn_url(url):
    """Extract the post ID from the URL passed on the command line."""
    parse_result = urlparse(url)
    try:
        return parse_qs(parse_result.query)['id'][0]
    except (KeyError, IndexError):
        return None


async def main(loop, post_id):
    async with aiohttp.ClientSession(loop=loop) as session:
        now = datetime.now()
        comments = await post_number_of_comments(loop, session, post_id)
        log.info(
            '> Calculating comments took {:.2f} seconds and {} fetches'.format(
                (datetime.now() - now).total_seconds(), fetch_counter))

    return comments


if __name__ == '__main__':
    args = parser.parse_args()
    if args.verbose:
        log.setLevel(logging.DEBUG)

    post_id = id_from_hn_url(args.url) if args.url else args.id

    loop = asyncio.get_event_loop()
    comments = loop.run_until_complete(main(loop, post_id))
    log.info("-- Post {} has {} comments".format(post_id, comments))

    # let any tasks that are still pending finish before closing the loop
    pending_tasks = [
        task for task in asyncio.Task.all_tasks() if not task.done()
    ]
    loop.run_until_complete(asyncio.gather(*pending_tasks))
    loop.close()
Running it now produces:
[23:47:24] email send success
[23:47:25] email send success
[23:47:25] > Calculating comments took 3.29 seconds and 73 fetches
[23:47:25] -- Post 8863 has 72 comments
[23:47:25] email send success
[23:47:25] email send success
[23:47:25] email send success
[23:47:26] email send success
[23:47:26] email send success
[23:47:27] email send success
[23:47:27] email send success
Everything seems back to normal. The asyncio method we used here is:
asyncio.Task.all_tasks()
It is genuinely useful: it returns every task on our loop, and we use task.done() to filter out the ones that have not finished yet so we can let them run to completion. The downside of doing it this way is that asyncio.Task.all_tasks() hands us all the tasks on the loop, including ones we do not care about, while we only want to manage our own. So let's collect the email-sending secondary tasks in a dedicated registry, which makes them easy to deal with afterwards. The code becomes:
import asyncio
import argparse
import logging
import random
from urllib.parse import urlparse, parse_qs
from datetime import datetime

import aiohttp
import async_timeout

LOGGER_FORMAT = '%(asctime)s %(message)s'
URL_TEMPLATE = "https://hacker-news.firebaseio.com/v0/item/{}.json"
FETCH_TIMEOUT = 10
# the comment-count threshold above which we send an email
MIN_COMMENTS = 2

parser = argparse.ArgumentParser(
    description='Fetch all comments of the requested post')
parser.add_argument('--id', type=int, default=8863,
                    help='ID of the post to fetch, defaults to 8863')
parser.add_argument('--url', type=str, help='URL of an HN post')
parser.add_argument('--verbose', action='store_true', help='verbose output')

logging.basicConfig(format=LOGGER_FORMAT, datefmt='[%H:%M:%S]')
log = logging.getLogger()
log.setLevel(logging.INFO)

fetch_counter = 0


async def fetch(session, url):
    """Fetch a URL using aiohttp and return the parsed JSON response."""
    global fetch_counter
    with async_timeout.timeout(FETCH_TIMEOUT):
        fetch_counter += 1
        # the API is blocked where I live, hence the local proxy
        async with session.get(url, proxy='http://127.0.0.1:1081') as response:
            return await response.json()


async def post_number_of_comments(loop, session, post_id):
    """Recursively fetch all the comments of the requested post."""
    url = URL_TEMPLATE.format(post_id)
    now = datetime.now()
    response = await fetch(session, url)
    log.debug('{:^6} > Fetching of {} took {} seconds'.format(
        post_id, url, (datetime.now() - now).total_seconds()))

    if 'kids' not in response:  # the post has no comments
        return 0

    # the number of direct comments on this post
    number_of_comments = len(response['kids'])
    log.debug('{:^6} > Fetching {} child posts'.format(
        post_id, number_of_comments))
    tasks = [post_number_of_comments(
        loop, session, kid_id) for kid_id in response['kids']]

    # collect the results of all the tasks
    results = await asyncio.gather(*tasks)
    number_of_comments += sum(results)
    log.debug('{:^6} > {} comments'.format(post_id, number_of_comments))

    if number_of_comments > MIN_COMMENTS:
        # await email_post(response)
        task_registry.append(asyncio.ensure_future(email_post(response)))

    return number_of_comments


async def email_post(post):
    """Simulate sending an email; nothing is actually sent."""
    await asyncio.sleep(random.random() * 3)
    log.info("email send success")


def id_from_hn_url(url):
    """Extract the post ID from the URL passed on the command line."""
    parse_result = urlparse(url)
    try:
        return parse_qs(parse_result.query)['id'][0]
    except (KeyError, IndexError):
        return None


async def main(loop, post_id):
    async with aiohttp.ClientSession(loop=loop) as session:
        now = datetime.now()
        comments = await post_number_of_comments(loop, session, post_id)
        log.info(
            '> Calculating comments took {:.2f} seconds and {} fetches'.format(
                (datetime.now() - now).total_seconds(), fetch_counter))

    return comments


if __name__ == '__main__':
    args = parser.parse_args()
    if args.verbose:
        log.setLevel(logging.DEBUG)

    post_id = id_from_hn_url(args.url) if args.url else args.id
    task_registry = []  # holds the secondary email-sending tasks

    loop = asyncio.get_event_loop()
    comments = loop.run_until_complete(main(loop, post_id))
    log.info("-- Post {} has {} comments".format(post_id, comments))

    pending_tasks = [
        task for task in task_registry if not task.done()
    ]
    loop.run_until_complete(asyncio.gather(*pending_tasks))
    loop.close()
And the output:
[23:54:10] > Calculating comments took 8.33 seconds and 73 fetches
[23:54:10] -- Post 8863 has 72 comments
[23:54:11] email send success
[23:54:11] email send success
[23:54:11] email send success
[23:54:12] email send success
[23:54:12] email send success
[23:54:12] email send success
[23:54:12] email send success
[23:54:13] email send success
[23:54:13] email send success
Seeing all this, aren't you starting to feel that Python's asyncio isn't so hard after all, and is actually quite pleasant to use? Then on to the final part.
3. asyncio — The Final Showdown
By continually improving the code above we have grown more familiar with how asyncio is used, but it is still all fairly simple: so far we have only crawled the comments under a single URL. How should the code change if we want the comments of multiple URLs? The HN API has an endpoint that returns the top 500 stories, and we will fetch all the comments of just the first few of those 500. Of course, the top 500 list may be refreshed every day, possibly several times a day, so our task should fetch the data once, wait a while, then fetch again, so that we always have the latest data. Let's improve the code further:
import asyncio
import argparse
import logging
from datetime import datetime

import aiohttp
import async_timeout

LOGGER_FORMAT = '%(asctime)s %(message)s'
URL_TEMPLATE = "https://hacker-news.firebaseio.com/v0/item/{}.json"
TOP_STORIES_URL = "https://hacker-news.firebaseio.com/v0/topstories.json"
FETCH_TIMEOUT = 10

parser = argparse.ArgumentParser(
    description='Fetch the comment counts of Hacker News posts')
parser.add_argument(
    '--period', type=int, default=5,
    help='interval between each poll, in seconds')
parser.add_argument(
    '--limit', type=int, default=5,
    help='how many of the top 500 stories to fetch, defaults to the first 5')
parser.add_argument('--verbose', action='store_true', help='verbose output')

logging.basicConfig(format=LOGGER_FORMAT, datefmt='[%H:%M:%S]')
log = logging.getLogger()
log.setLevel(logging.INFO)

fetch_counter = 0


async def fetch(session, url):
    """Fetch a URL and return the parsed JSON response."""
    global fetch_counter
    with async_timeout.timeout(FETCH_TIMEOUT):
        fetch_counter += 1
        async with session.get(url, proxy="http://127.0.0.1:1080") as response:
            return await response.json()


async def post_number_of_comments(loop, session, post_id):
    """Fetch the post's data and recursively fetch all its comments."""
    url = URL_TEMPLATE.format(post_id)
    now = datetime.now()
    response = await fetch(session, url)
    log.debug('{:^6} > Fetching of {} took {} seconds'.format(
        post_id, url, (datetime.now() - now).total_seconds()))

    if 'kids' not in response:  # the post has no comments
        return 0

    # the number of direct comments on this post
    number_of_comments = len(response['kids'])
    log.debug('{:^6} > Fetching {} child posts'.format(
        post_id, number_of_comments))
    tasks = [post_number_of_comments(
        loop, session, kid_id) for kid_id in response['kids']]

    # recursively fetch the comments of every comment
    results = await asyncio.gather(*tasks)

    # the total number of comments of this post
    number_of_comments += sum(results)
    log.debug('{:^6} > {} comments'.format(post_id, number_of_comments))
    return number_of_comments


async def get_comments_of_top_stories(loop, session, limit, iteration):
    """Fetch the first few of the top 500 stories."""
    response = await fetch(session, TOP_STORIES_URL)
    tasks = [post_number_of_comments(
        loop, session, post_id) for post_id in response[:limit]]
    results = await asyncio.gather(*tasks)
    for post_id, num_comments in zip(response[:limit], results):
        log.info("Post {} has {} comments ({})".format(
            post_id, num_comments, iteration))


async def poll_top_stories_for_comments(loop, session, period, limit):
    """Periodically poll the top stories."""
    global fetch_counter
    iteration = 1
    while True:
        now = datetime.now()
        log.info("Calculating comments for top {} stories. ({})".format(
            limit, iteration))
        await get_comments_of_top_stories(loop, session, limit, iteration)

        log.info(
            '> Calculating comments took {:.2f} seconds and {} fetches'.format(
                (datetime.now() - now).total_seconds(), fetch_counter))
        log.info("Waiting for {} seconds...".format(period))
        iteration += 1
        fetch_counter = 0
        # the interval between polls
        await asyncio.sleep(period)


async def main(loop, period, limit):
    async with aiohttp.ClientSession(loop=loop) as session:
        comments = await poll_top_stories_for_comments(loop, session, period, limit)

    return comments


if __name__ == '__main__':
    args = parser.parse_args()
    if args.verbose:
        log.setLevel(logging.DEBUG)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(loop, args.period, args.limit))
    loop.close()
The output looks like this:
[16:24:28] Calculating comments for top 5 stories. (1)
[16:24:41] Post 19334909 has 156 comments (1)
[16:24:41] Post 19333600 has 147 comments (1)
[16:24:41] Post 19335363 has 9 comments (1)
[16:24:41] Post 19330812 has 341 comments (1)
[16:24:41] Post 19333479 has 81 comments (1)
[16:24:41] > Calculating comments took 12.17 seconds and 740 fetches
[16:24:41] Waiting for 5 seconds...
[16:24:46] Calculating comments for top 5 stories. (2)
[16:24:50] Post 19334909 has 156 comments (2)
[16:24:50] Post 19333600 has 147 comments (2)
[16:24:50] Post 19335363 has 9 comments (2)
[16:24:50] Post 19330812 has 341 comments (2)
[16:24:50] Post 19333479 has 81 comments (2)
[16:24:50] > Calculating comments took 4.75 seconds and 740 fetches
[16:24:50] Waiting for 5 seconds...
Traceback (most recent call last):
From the output we can see that the iterations are not actually 5 seconds apart, because at await get_comments_of_top_stories(loop, session, limit, iteration) we must wait for the whole call to finish before entering the next loop iteration. But sometimes we don't want to wait there; we want to move straight on. Once again the old trick, ensure_future, does the job. Change that line to:
asyncio.ensure_future(get_comments_of_top_stories(loop, session, limit, iteration))
Running again:
[16:44:07] Calculating comments for top 5 stories. (1)
[16:44:07] > Calculating comments took 0.00 seconds and 0 fetches
[16:44:07] Waiting for 5 seconds...
[16:44:12] Calculating comments for top 5 stories. (2)
[16:44:12] > Calculating comments took 0.00 seconds and 49 fetches
[16:44:12] Waiting for 5 seconds...
[16:44:17] Calculating comments for top 5 stories. (3)
[16:44:17] > Calculating comments took 0.00 seconds and 1044 fetches
[16:44:17] Waiting for 5 seconds...
[16:44:21] Post 19334909 has 159 comments (1)
[16:44:21] Post 19333600 has 150 comments (1)
[16:44:21] Post 19335363 has 13 comments (1)
[16:44:21] Post 19330812 has 342 comments (1)
[16:44:21] Post 19333479 has 81 comments (1)
[16:44:22] Post 19334909 has 159 comments (3)
[16:44:22] Post 19333600 has 150 comments (3)
[16:44:22] Post 19335363 has 13 comments (3)
[16:44:22] Post 19330812 has 342 comments (3)
[16:44:22] Post 19333479 has 81 comments (3)
[16:44:22] Calculating comments for top 5 stories. (4)
[16:44:22] > Calculating comments took 0.00 seconds and 1158 fetches
[16:44:22] Waiting for 5 seconds...
[16:44:23] Post 19334909 has 159 comments (2)
[16:44:23] Post 19333600 has 150 comments (2)
[16:44:23] Post 19335363 has 13 comments (2)
[16:44:23] Post 19330812 has 342 comments (2)
[16:44:23] Post 19333479 has 81 comments (2)
[16:44:26] Post 19334909 has 159 comments (4)
[16:44:26] Post 19333600 has 150 comments (4)
[16:44:26] Post 19335363 has 13 comments (4)
[16:44:26] Post 19330812 has 343 comments (4)
[16:44:26] Post 19333479 has 81 comments (4)
[16:44:27] Calculating comments for top 5 stories. (5)
[16:44:27] > Calculating comments took 0.00 seconds and 754 fetches
[16:44:27] Waiting for 5 seconds...
Now the iterations really are 5 seconds apart, but a new problem has appeared: a run reports taking 0.00 seconds with 0 fetches, and the subsequent fetch counts are all wrong too. The cause in both cases is that we no longer wait for get_comments_of_top_stories(loop, session, limit, iteration): the timing and the shared fetch counter are read as soon as the task is scheduled, not when it actually finishes.
Doesn't this make you think of your old friend, the callback? Ha! Here is the improved code:
import asyncio
import argparse
import logging
from datetime import datetime

import aiohttp
import async_timeout

LOGGER_FORMAT = '%(asctime)s %(message)s'
URL_TEMPLATE = "https://hacker-news.firebaseio.com/v0/item/{}.json"
TOP_STORIES_URL = "https://hacker-news.firebaseio.com/v0/topstories.json"
FETCH_TIMEOUT = 10

parser = argparse.ArgumentParser(
    description='Fetch the comment counts of Hacker News posts')
parser.add_argument(
    '--period', type=int, default=5,
    help='interval between each poll, in seconds')
parser.add_argument(
    '--limit', type=int, default=5,
    help='how many of the top 500 stories to fetch, defaults to the first 5')
parser.add_argument('--verbose', action='store_true', help='verbose output')

logging.basicConfig(format=LOGGER_FORMAT, datefmt='[%H:%M:%S]')
log = logging.getLogger()
log.setLevel(logging.INFO)


class URLFetcher():
    """Counts its own fetches, so each crawl gets its own counter."""

    def __init__(self):
        self.fetch_counter = 0

    async def fetch(self, session, url):
        with async_timeout.timeout(FETCH_TIMEOUT):
            self.fetch_counter += 1
            async with session.get(url, proxy="http://127.0.0.1:1080") as response:
                return await response.json()


async def post_number_of_comments(loop, session, fetcher, post_id):
    """Fetch the post's data and recursively fetch all its comments."""
    url = URL_TEMPLATE.format(post_id)
    response = await fetcher.fetch(session, url)

    # the post has no comments
    if response is None or 'kids' not in response:
        return 0

    # the number of direct comments on this post
    number_of_comments = len(response['kids'])
    tasks = [post_number_of_comments(
        loop, session, fetcher, kid_id) for kid_id in response['kids']]

    # recursively fetch the comments of every comment
    results = await asyncio.gather(*tasks)

    # the total number of comments of this post
    number_of_comments += sum(results)
    log.debug('{:^6} > {} comments'.format(post_id, number_of_comments))
    return number_of_comments


async def get_comments_of_top_stories(loop, session, limit, iteration):
    """Fetch the first few of the top 500 stories."""
    fetcher = URLFetcher()
    response = await fetcher.fetch(session, TOP_STORIES_URL)
    tasks = [post_number_of_comments(
        loop, session, fetcher, post_id) for post_id in response[:limit]]
    results = await asyncio.gather(*tasks)
    for post_id, num_comments in zip(response[:limit], results):
        log.info("Post {} has {} comments ({})".format(
            post_id, num_comments, iteration))
    return fetcher.fetch_counter


async def poll_top_stories_for_comments(loop, session, period, limit):
    """Periodically poll the top stories."""
    iteration = 1
    while True:
        log.info("Calculating comments for top {} stories. ({})".format(
            limit, iteration))
        future = asyncio.ensure_future(
            get_comments_of_top_stories(loop, session, limit, iteration))

        now = datetime.now()

        # use a callback to report the duration and the number of fetches
        # of each crawl once it actually finishes
        def callback(fut):
            fetch_count = fut.result()
            log.info(
                '> Calculating comments took {:.2f} seconds and {} fetches'.format(
                    (datetime.now() - now).total_seconds(), fetch_count))

        future.add_done_callback(callback)

        log.info("Waiting for {} seconds...".format(period))
        iteration += 1
        await asyncio.sleep(period)


async def main(loop, period, limit):
    async with aiohttp.ClientSession(loop=loop) as session:
        comments = await poll_top_stories_for_comments(loop, session, period, limit)

    return comments


if __name__ == '__main__':
    args = parser.parse_args()
    if args.verbose:
        log.setLevel(logging.DEBUG)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(loop, args.period, args.limit))
    loop.close()
Running the code once more:
[17:00:17] Calculating comments for top 5 stories. (1)
[17:00:17] Waiting for 5 seconds...
[17:00:22] Calculating comments for top 5 stories. (2)
[17:00:22] Waiting for 5 seconds...
[17:00:27] Calculating comments for top 5 stories. (3)
[17:00:27] Waiting for 5 seconds...
[17:00:30] Post 19334909 has 163 comments (1)
[17:00:30] Post 19333600 has 152 comments (1)
[17:00:30] Post 19335363 has 14 comments (1)
[17:00:30] Post 19330812 has 346 comments (1)
[17:00:30] Post 19335853 has 1 comments (1)
[17:00:30] > Calculating comments took 2.31 seconds and 682 fetches
[17:00:32] Calculating comments for top 5 stories. (4)
[17:00:32] Waiting for 5 seconds...
[17:00:33] Post 19334909 has 163 comments (2)
[17:00:33] Post 19333600 has 152 comments (2)
[17:00:33] Post 19335363 has 14 comments (2)
[17:00:33] Post 19330812 has 346 comments (2)
[17:00:33] Post 19335853 has 1 comments (2)
[17:00:33] > Calculating comments took 0.80 seconds and 682 fetches
[17:00:34] Post 19334909 has 163 comments (3)
[17:00:34] Post 19333600 has 152 comments (3)
[17:00:34] Post 19335363 has 14 comments (3)
[17:00:34] Post 19330812 has 346 comments (3)
[17:00:34] Post 19335853 has 1 comments (3)
[17:00:34] > Calculating comments took 1.24 seconds and 682 fetches
[17:00:37] Calculating comments for top 5 stories. (5)
[17:00:37] Waiting for 5 seconds...
[17:00:42] Post 19334909 has 163 comments (5)
[17:00:42] Post 19333600 has 152 comments (5)
[17:00:42] Post 19335363 has 15 comments (5)
[17:00:42] Post 19330812 has 346 comments (5)
[17:00:42] Post 19335853 has 1 comments (5)
[17:00:42] > Calculating comments took 4.55 seconds and 683 fetches
[17:00:42] Calculating comments for top 5 stories. (6)
[17:00:42] Waiting for 5 seconds...
With that, the code is basically where we want it, and the output finally looks the way we expect.
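One last aside: on Python 3.7+ the newer high-level APIs make the whole pattern tidier. The sketch below shows only the skeleton of the polling loop under those APIs; the crawling is replaced by a made-up sleep, so every name other than the asyncio calls themselves is hypothetical:

import asyncio
import random
from datetime import datetime


async def get_comments_of_top_stories(iteration):
    # made-up stand-in for the real crawling logic
    await asyncio.sleep(random.random() * 3)
    return random.randint(500, 800)  # pretend this is the fetch count


def make_callback(iteration, started):
    def callback(fut):
        print("> iteration {} took {:.2f} seconds and {} fetches".format(
            iteration, (datetime.now() - started).total_seconds(),
            fut.result()))
    return callback


async def poll(period, iterations):
    tasks = []
    for iteration in range(1, iterations + 1):
        # create_task() is the 3.7+ replacement for ensure_future()
        task = asyncio.create_task(get_comments_of_top_stories(iteration))
        task.add_done_callback(make_callback(iteration, datetime.now()))
        tasks.append(task)
        await asyncio.sleep(period)
    # let the fire-and-forget tasks finish before asyncio.run() closes the loop
    await asyncio.gather(*tasks)


# asyncio.run() creates and closes the event loop for us (3.7+)
asyncio.run(poll(period=2, iterations=3))

Note the make_callback factory: it pins iteration and started to each task, instead of letting a closure defined inside the loop silently pick up the latest values, the way the callback in the code above can.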
4. Wrapping Up
Honestly, before putting this together my own understanding of asyncio was fuzzy in plenty of places too; it has been a matter of crossing the river by feeling for the stones, solving problems as they came up. Writing it all down has made many of those once-fuzzy spots much clearer to me.
You are also welcome to share your own experience with Python asyncio; feel free to join the QQ group: 948510543