
Example of a web crawler in Python using RabbitMQ


Write tasks.py

The code is as follows:

from celery import Celery
from tornado.httpclient import HTTPClient, HTTPError

app = Celery('tasks')
app.config_from_object('celeryconfig')

@app.task
def get_html(url):
    # Fetch the page body synchronously with Tornado's HTTP client.
    http_client = HTTPClient()
    try:
        response = http_client.fetch(url, follow_redirects=True)
        return response.body
    except HTTPError:
        return None
    finally:
        # Close the client whether the fetch succeeded or failed.
        http_client.close()
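
A Celery worker pulls get_html tasks off the RabbitMQ queue; calling get_html.delay(url) from another process only enqueues a message and immediately returns an AsyncResult, whose get() blocks until the worker has stored the page body in the result backend. A minimal sketch of that call pattern, assuming a broker and a worker are already running (the example URL is just a placeholder):

from tasks import get_html

result = get_html.delay('http://example.com/')  # enqueue the fetch on RabbitMQ
html = result.get(timeout=10)                   # block until a worker returns the body
print(len(html) if html else 'fetch failed')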

Write celeryconfig.py

The code is as follows:

# Modules the worker imports so that the tasks are registered
CELERY_IMPORTS = ('tasks',)
# RabbitMQ serves as both the message broker and the result backend
BROKER_URL = 'amqp://guest@localhost:5672//'
CELERY_RESULT_BACKEND = 'amqp://'
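
BROKER_URL follows the amqp://user@host:port/vhost form, so this configuration expects a RabbitMQ server on localhost with the default guest account, and task results are sent back over AMQP as well. Before running the crawler, at least one Celery worker has to be started so that queued get_html tasks are actually consumed, for example with: celery -A tasks worker --loglevel=info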

Write spider.py

The code is as follows:

from tasks import get_html
from queue import Queue
from bs4 import BeautifulSoup
from urllib.parse import urlparse, urljoin
import threading


class Spider(object):
    def __init__(self):
        self.visited = {}
        self.queue = Queue()

    def process_html(self, html):
        pass
        # print(html)

    def _add_links_to_queue(self, url_base, html):
        # Extract every <a href> on the page and push absolute URLs onto the queue.
        soup = BeautifulSoup(html, 'html.parser')
        links = soup.find_all('a')
        for link in links:
            try:
                url = link['href']
            except KeyError:
                continue
            url_com = urlparse(url)
            if not url_com.netloc:
                # Relative link: resolve it against the page it was found on.
                self.queue.put(urljoin(url_base, url))
            else:
                self.queue.put(url_com.geturl())

    def start(self, url):
        self.queue.put(url)
        # 20 worker threads pull URLs off the queue and dispatch them to Celery.
        for i in range(20):
            t = threading.Thread(target=self._worker)
            t.daemon = True
            t.start()
        self.queue.join()

    def _worker(self):
        while 1:
            url = self.queue.get()
            if url in self.visited:
                self.queue.task_done()
                continue
            # Hand the actual download off to a Celery worker via RabbitMQ.
            result = get_html.delay(url)
            try:
                html = result.get(timeout=5)
            except Exception as e:
                print(url)
                print(e)
                self.queue.task_done()
                continue
            if html:
                self.process_html(html)
                self._add_links_to_queue(url, html)
            self.visited[url] = True
            self.queue.task_done()


s = Spider()
s.start("http://www.jb51.net/")

Because of certain corner cases in real-world HTML, the program still needs further refinement.
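
process_html() is deliberately left empty above; what to do with each fetched page depends on the application. As a purely illustrative sketch (the TitleSpider name and the title extraction are assumptions, not part of the original program), the class could be subclassed inside spider.py to record each page's title:

class TitleSpider(Spider):
    def process_html(self, html):
        # Illustration only: print the <title> of every fetched page.
        soup = BeautifulSoup(html, 'html.parser')
        if soup.title and soup.title.string:
            print(soup.title.string.strip())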