
Writing a Crawler in 150 Lines of Code (Part 2)


Previous post: http://dushen.iteye.com/blog/2415336

Project repository: https://gitee.com/dushen666/spider.git

Continuing from the previous post: last time we got the crawler fetching data and saving it as JSON files. In this post I will insert the data into a relational database and implement deduplication.

Here we take MySQL as the example database:

  1. Create the table structure according to the items from the previous post:
    /*
    SQLyog Ultimate v10.42 
    MySQL - 5.7.20-0ubuntu0.16.04.1 : Database - movie-website1
    *********************************************************************
    */
    
    
    /*!40101 SET NAMES utf8 */;
    
    /*!40101 SET SQL_MODE=''*/;
    
    /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
    /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
    /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
    /*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
    /*Table structure for table `spider_h6080` */
    
    CREATE TABLE `spider_h6080` (
      `name` VARCHAR(1000) DEFAULT NULL,
      `url` VARCHAR(1000) DEFAULT NULL,
      `num` VARCHAR(1000) DEFAULT NULL,
      FULLTEXT KEY `spiderh6080index` (`name`,`url`)
    ) ENGINE=INNODB DEFAULT CHARSET=utf8;
    
    /*Table structure for table `spider_h6080_movieinfo` */
    
    CREATE TABLE `spider_h6080_movieinfo` (
      `id` INT(11) NOT NULL AUTO_INCREMENT,
      `moviename` VARCHAR(100) DEFAULT NULL,
      `prefilename` VARCHAR(50) DEFAULT NULL,
      `suffixname` VARCHAR(20) DEFAULT NULL,
      `createtime` DATETIME(6) DEFAULT NULL,
      `updatetime` DATETIME(6) DEFAULT NULL,
      `publishtime` VARCHAR(100) DEFAULT NULL,
      `types` VARCHAR(200) DEFAULT NULL,
      `area` VARCHAR(200) DEFAULT NULL,
      `language` VARCHAR(200) DEFAULT NULL,
      `actor` VARCHAR(200) DEFAULT NULL,
      `director` VARCHAR(200) DEFAULT NULL,
      `keyword` VARCHAR(200) DEFAULT NULL,
      `weight` INT(11) DEFAULT NULL,
      `countnumber` INT(11) DEFAULT NULL,
      `avaliblesum` INT(11) DEFAULT NULL,
      `introduce` VARCHAR(2000) DEFAULT NULL,
      `clickcount` INT(11) DEFAULT NULL,
      `playcount` INT(11) DEFAULT NULL,
      `duration` VARCHAR(100) DEFAULT NULL,
      `isoutsource` VARCHAR(2) DEFAULT NULL,
      `picurl` VARCHAR(500) DEFAULT NULL,
      `classify_id` INT(11) DEFAULT NULL,
      PRIMARY KEY (`id`),
      FULLTEXT KEY `moviename` (`moviename`)
    ) ENGINE=INNODB AUTO_INCREMENT=13748 DEFAULT CHARSET=utf8;
    
    /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
    /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
    /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
    /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
    
     
  2. Install MySQL-python, which we use to connect to our MySQL database:

     On Ubuntu, the installation steps are:

    sudo apt-get install libmysqlclient-dev libmysqld-dev python-dev python-setuptools
    pip install MySQL-python
     

     On Windows, download MySQL-python-1.2.5.win-amd64-py2.7.exe from the attachment and double-click it to install.

     

     To verify, type import MySQLdb at the Python interactive prompt; if no error is raised, the installation succeeded:

    C:\Users\du>python
    Python 2.7.10 (default, May 23 2015, 09:44:00) [MSC v.1500 64 bit (AMD64)] on wi
    n32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import MySQLdb
    >>>
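
     Once the driver imports cleanly, connecting and querying works as in the minimal sketch below (the connection settings mirror the pipeline code later in this post; adjust them to your environment). Note that passing values as a parameter tuple lets the driver escape them, which is safer than interpolating strings into the SQL yourself:

    import MySQLdb

    # Connection settings copied from the pipeline below -- adjust as needed.
    conn = MySQLdb.connect(host='127.0.0.1', port=3306, user='root',
                           passwd='ROOT', db='movie-website1', charset='utf8')
    cur = conn.cursor()
    # Parameterized query: the driver escapes the value for us.
    cur.execute("SELECT NAME, url, num FROM spider_h6080 WHERE NAME = %s", ('some movie',))
    for row in cur.fetchall():
        print row
    cur.close()
    conn.close()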

     

  3. Install pybloom, used for deduplication.
    pip install pybloom
     Note: pybloom implements the Bloom filter, proposed by Burton Howard Bloom in 1970. A Bloom filter is essentially a long bit vector combined with a series of random hash functions, and it can be used to test whether an element is a member of a set.
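
     As a quick sketch of the pybloom API (the URLs here are made-up examples):

    from pybloom import BloomFilter

    bf = BloomFilter(capacity=1000, error_rate=0.001)
    print bf.add('http://example.com/page1')   # False: not seen before, now added
    print 'http://example.com/page1' in bf     # True: membership test, no false negatives
    print 'http://example.com/page2' in bf     # almost certainly False (false positives possible but rare)
    print bf.count                             # 1: number of elements added so far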

     There are generally three ways to deduplicate:
    1. Keep the deduplication field in a list; for every crawled item, check whether the field is already in the list and drop the item if it is, otherwise insert it into the database. However, as the list keeps growing, the program's memory usage grows with it, degrading performance and potentially even triggering the system's OOM killer, which kills the process. For example:

    self.ids_seen = []  # initialized once in __init__
    if item['url'] in self.ids_seen:
        raise DropItem("Exist Exception! Duplicate item found: %s" % item['name'])
    else:
        sql = ("INSERT INTO spider_h6080 (NAME,url,num) VALUES ('%s', '%s', '%s')" % (item['name'], item['url'], item['num']))
        try:
            self.cur.execute(sql)
            self.conn.commit()
            self.ids_seen.append(item['url'])  # record the url only after a successful insert
        except Exception as err:
            raise DropItem("DB Exception! Duplicate item found: %s" % err)
        return item

     2. Add a primary (or unique) key in the database and rely on the duplicate-key error the database returns; however, a flood of database errors may crash the program. A variant that catches only duplicate-key errors is sketched after this snippet.

    sql = ("INSERT INTO spider_h6080 (NAME,url,num) VALUES ('%s', '%s', '%s')" % (item['name'], item['url'], item['num']))
    try:
    	self.cur.execute(sql)
    	self.conn.commit()
    except Exception as err:
    	raise DropItem("%s" % err)
    return item
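
     A way to soften approach 2 (a sketch, not from the original project) is to add a unique index on url and catch MySQLdb.IntegrityError specifically, so that only duplicate-key violations are dropped while other database errors still surface:

    import MySQLdb
    from scrapy.exceptions import DropItem

    # Assumes a unique index exists on url, e.g.:
    #   ALTER TABLE `spider_h6080` ADD UNIQUE KEY `uniq_url` (`url`(255));
    try:
        # Parameterized INSERT; raises IntegrityError on a duplicate url.
        self.cur.execute("INSERT INTO spider_h6080 (NAME,url,num) VALUES (%s, %s, %s)",
                         (item['name'], item['url'], item['num']))
        self.conn.commit()
    except MySQLdb.IntegrityError as err:
        self.conn.rollback()
        raise DropItem("Duplicate key: %s" % err)
    return item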

     3. Use a Bloom filter. A Bloom filter trades a small, bounded error rate for memory and speed: the program's memory footprint still grows, but the growth is minuscule compared with a list (see the estimate after the snippet below).

    from pybloom import BloomFilter
    
    self.ids_seen = BloomFilter(capacity=5000000, error_rate=0.001)  # initialized once in __init__
    
    if item['url'] in self.ids_seen:
    	raise DropItem("Exist Exception! Duplicate item found: %s" % item['name'])
    else:
    	sql = ("INSERT INTO spider_h6080 (NAME,url,num) VALUES ('%s', '%s', '%s')" % (item['name'], item['url'], item['num']))
    	try:
    		self.cur.execute(sql)
    		self.conn.commit()
    		self.ids_seen.add(item['url'])
    	except Exception as err:
    		raise DropItem("DB Exception! Duplicate item found: %s" % err)
    	return item
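
     To make "minuscule memory growth" concrete, the standard Bloom-filter sizing formula m = -n*ln(p)/(ln 2)^2 gives the number of bits required. A quick back-of-the-envelope check with the same capacity and error rate used in the pipeline below:

    import math

    n = 5000000   # capacity: expected number of elements
    p = 0.001     # error_rate: acceptable false-positive probability
    m = -n * math.log(p) / (math.log(2) ** 2)  # optimal number of bits
    k = (m / n) * math.log(2)                  # optimal number of hash functions
    print "bits: %.0f (~%.1f MB), hash functions: %.0f" % (m, m / 8 / 1024 / 1024, round(k))
    # ~71.9 million bits, roughly 8.6 MB for 5,000,000 URLs -- a plain Python
    # list holding that many URL strings would need hundreds of MB.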

     

  4. The code of pipeline.py is as follows:
    # -*- coding: utf-8 -*-
    
    # Define your item pipelines here
    #
    # Don't forget to add your pipeline to the ITEM_PIPELINES setting
    # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
    import sys
    reload(sys)
    sys.setdefaultencoding('utf-8')
    from scrapy.exceptions import DropItem
    from spider.items import H6080Item, H6080MovieInfo
    import MySQLdb
    from pybloom import BloomFilter
    
    
    class Spider1Pipeline(object):
        def __init__(self):
            # Bloom filters for the two deduplication keys (these replaced the
            # plain lists used in earlier versions).
            self.ids_seen = BloomFilter(capacity=5000000, error_rate=0.001)
            self.movienames_seen = BloomFilter(capacity=5000000, error_rate=0.001)
            self.conn = MySQLdb.connect(host='127.0.0.1', port=3306, user='root', passwd='ROOT', db='movie-website1', charset='utf8')
            self.cur = self.conn.cursor()
            # Pre-load keys already stored in the database so that a restarted
            # crawl does not insert duplicates.
            self.cur.execute('SELECT url FROM spider_h6080')
            result = self.cur.fetchall()
            for row in result:
                self.ids_seen.add(row[0])
            self.cur.execute('SELECT picurl FROM spider_h6080_movieinfo')
            result = self.cur.fetchall()
            for row in result:
                self.movienames_seen.add(row[0])
    
        def process_item(self, item, spider):
            print "Deep of ids_seen: %s" % self.ids_seen.count
            print "Deep of movienames_seen: %s" % self.movienames_seen.count
            if isinstance(item, H6080Item):
                if item['url'] in self.ids_seen:
                    raise DropItem("Exist Exception! Duplicate item found: %s" % item['name'])
                else:
                    sql = ("INSERT INTO spider_h6080 (NAME,url,num) VALUES ('%s', '%s', '%s')" % (item['name'], item['url'], item['num']))
                    try:
                        self.cur.execute(sql)
                        self.conn.commit()
                        self.ids_seen.add(item['url'])
                    except Exception as err:
                        raise DropItem("DB Exception! Duplicate item found: %s" % err)
                    return item
            elif isinstance(item, H6080MovieInfo):
                if item['picurl'] in self.movienames_seen:
                    raise DropItem("Exist Exception! Duplicate item found: %s" % item['name'])
                sql = "INSERT INTO spider_h6080_movieinfo (moviename,actor,TYPES,AREA,publishtime,countnumber,introduce,director,picurl) VALUES ('%s', '%s', '%s', '%s','%s', '%s', '%s', '%s', '%s')" % (item['name'], item['actor'], item['types'], item['area'], item['publishtime'], item['countnumber'], item['introduce'], item['director'], item['picurl'])
                try:
                    self.cur.execute(sql)
                    self.conn.commit()
                    self.movienames_seen.add(item['picurl'])
                except Exception as err:
                    raise DropItem("DB Exception! Duplicate item found: %s" % err)
                return item
    
     Here the two items use url and picurl respectively as their deduplication keys, with one Bloom filter declared for each field. In BloomFilter(capacity=5000000, error_rate=0.001), capacity is the maximum number of elements the filter can store and error_rate is the highest tolerable false-positive rate.
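
     Also remember to register the pipeline in settings.py, or Scrapy will never call it (a sketch; the dotted path is an assumption -- point it at wherever the Spider1Pipeline class actually lives in your project):

    # settings.py
    ITEM_PIPELINES = {
        'spider.pipelines.Spider1Pipeline': 300,  # lower number = runs earlier
    }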

With that, the crawler can write data to the database and deduplicate it; we can now use it to crawl video data.

 

Finally, once more, the project repository: https://gitee.com/dushen666/spider.git