Using the Python crawler framework Scrapy
Installation
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple scrapy
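The Tsinghua PyPI mirror in the command above is only a download speed-up for users in China; a plain pip install scrapy works just as well. A quick sanity check that the command-line tool is available:
scrapy version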
Create a crawler project
scrapy startproject mypachong
Project structure
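The original screenshot of the generated layout is not reproduced here; by default, scrapy startproject mypachong creates roughly the following tree (the file names are Scrapy's standard ones, and the comments map them to the code shown below):
mypachong/
    scrapy.cfg            # deploy/launch configuration
    mypachong/
        __init__.py
        items.py          # MypachongItem goes here
        middlewares.py    # RandomUA goes here
        pipelines.py      # MypachongPipeline goes here
        settings.py       # project settings
        spiders/
            __init__.py   # generated spiders live in this package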
Create a Spider
scrapy genspider quotes book.zongheng.com
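This drops a bare skeleton into mypachong/spiders/quotes.py, roughly the following (the exact template varies slightly between Scrapy versions); the next section fills in the parsing logic:
import scrapy


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['book.zongheng.com']
    start_urls = ['http://book.zongheng.com/']

    def parse(self, response):
        pass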
Processing the text content
import scrapy

# MypachongItem is defined in mypachong/items.py (shown below)
from mypachong.items import MypachongItem


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['book.zongheng.com']
    start_urls = ['http://book.zongheng.com/showchapter/2313244.html']

    def parse(self, response):
        # Chapter index page: follow every chapter link
        quotes = response.css('.chapter-list li')
        for quote in quotes:
            chapter = quote.css('a::attr(href)').extract_first()
            url = response.urljoin(chapter)
            yield scrapy.Request(url=url, callback=self.parse_content)

    def parse_content(self, response):
        # Chapter page: extract the book title, chapter title and paragraph text
        book = response.css('.reader_crumb a::text').extract()[2]
        chapter = response.css('.title_txtbox::text').extract_first()
        quotes = response.css('.content')
        for quote in quotes:
            text = quote.css('p::text').extract()
            item = MypachongItem()
            item['text'] = text
            item['book'] = book
            item['chapter'] = chapter
            yield item
# mypachong/items.py
import scrapy


class MypachongItem(scrapy.Item):
    text = scrapy.Field()
    book = scrapy.Field()
    chapter = scrapy.Field()
Randomizing the browser User-Agent:
# mypachong/middlewares.py
import random

USER_AGENTS = [
    'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)',
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
    'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36',
    'Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11',
    'Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; The World)',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko'
]


class RandomUA(object):
    def process_request(self, request, spider):
        # Pick a random User-Agent for every outgoing request
        ua = random.choice(USER_AGENTS)
        request.headers.setdefault('User-Agent', ua)
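Scrapy only runs this middleware if it is enabled in settings.py. A minimal sketch, assuming RandomUA was placed in mypachong/middlewares.py; the built-in UserAgentMiddleware is disabled so it does not set the header first and turn setdefault into a no-op:
# settings.py
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'mypachong.middlewares.RandomUA': 543,
}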
Processing the scraped data:
# mypachong/pipelines.py
import json


class MypachongPipeline(object):
    def __init__(self):
        # One JSON object per line in items.json
        self.file = open('items.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item
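The pipeline likewise has to be enabled in settings.py, otherwise process_item is never called. A minimal sketch, assuming the class sits in mypachong/pipelines.py:
# settings.py
ITEM_PIPELINES = {
    'mypachong.pipelines.MypachongPipeline': 300,
}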
Run the crawler
scrapy crawl quotes
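As an aside, for simple JSON output Scrapy's built-in feed export can stand in for the custom pipeline; a minimal sketch:
scrapy crawl quotes -o items.jl   # JSON-lines output, similar to what the pipeline above writes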