A Python Ajax Crawler Case Study
程序员文章站
2022-03-05 15:32:36
Contents
1. Scraping street-snap photos
2. Analyzing the structure of the street-snap requests
3. Organizing the code into functions by task
3.1 Fetching the page's JSON data
3.2 Extracting image links from the JSON data
3.3 Naming images by their MD5 hash and saving them
3.4 main() calls the other functions
4. Scraping 20 pages of Toutiao street-snap image data
1. Scraping street-snap photos
2. Analyzing the structure of the street-snap requests
Inspecting the Ajax requests shows the following query parameters:

```
keyword: 街拍
pd: atlas
dvpf: pc
aid: 4916
page_num: 1
search_json: {"from_search_id":"20220104115420010212192151532e8188","origin_keyword":"街拍","image_keyword":"街拍"}
rawjson: 1
search_id: 202201041159040101501341671a4749c4
```
A pattern emerges: page_num starts at 1 and increases by one per page, while every other parameter stays the same.
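The pagination rule can be sketched with a small helper. This is an illustrative subset of the real parameters (page_url is a hypothetical name, and the fixed parameters shown in the full code below are trimmed here for brevity):

```python
from urllib.parse import urlencode

def page_url(page_num):
    # only page_num varies between requests; the other parameters are fixed
    params = {
        'keyword': '街拍',
        'pd': 'atlas',
        'dvpf': 'pc',
        'aid': '4916',
        'page_num': page_num,
    }
    return 'https://so.toutiao.com/search?' + urlencode(params)

print(page_url(1))
print(page_url(2))
```

The two printed URLs differ only in the final page_num value; urlencode percent-encodes the non-ASCII keyword automatically.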
3. Organizing the code into functions by task
3.1 Fetching the page's JSON data
```python
def get_page(page_num):
    global headers
    headers = {
        'host': 'so.toutiao.com',
        # 'referer': 'https://so.toutiao.com/search?keyword=%e8%a1%97%e6%8b%8d&pd=atlas&dvpf=pc&aid=4916&page_num=0&search_json={%22from_search_id%22:%22202112272022060101510440283ee83d67%22,%22origin_keyword%22:%22%e8%a1%97%e6%8b%8d%22,%22image_keyword%22:%22%e8%a1%97%e6%8b%8d%22}',
        'user-agent': 'mozilla/5.0 (windows nt 10.0; wow64) applewebkit/537.36 (khtml, like gecko) chrome/86.0.4240.198 safari/537.36',
        'x-requested-with': 'xmlhttprequest',
        'cookie': 'mstoken=s0dfbkz9hmylogyd3_qjhhxgrm38qtyoitnknb0t_oavfbvxuyv1jz0tt5hlgswsfmzlfd6c2lonm_5tomuqxvxjen7cixm2agwbhhrykjhg; _s_dpr=1.5; _s_ipad=0; monitor_web_id=7046351002275317255; ttwid=1%7c0ydwalndiispik3cvvhwv25u8drq3qaj08e8qoapxhs%7c1640607595%7c720e971d353416921df127996ed708931b4ae28a0a8691a5466347697e581ce8; _s_win_wh=262_623'
    }
    params = {
        'keyword': '街拍',
        'pd': 'atlas',
        'dvpf': 'pc',
        'aid': '4916',
        'page_num': page_num,
        'search_json': '%7b%22from_search_id%22%3a%22202112272022060101510440283ee83d67%22%2c%22origin_keyword%22%3a%22%e8%a1%97%e6%8b%8d%22%2c%22image_keyword%22%3a%22%e8%a1%97%e6%8b%8d%22%7d',
        'rawjson': 1,
        'search_id': '2021122721183101015104402851e3883d'
    }
    url = 'https://so.toutiao.com/search?' + urlencode(params)
    print(url)
    try:
        # the parameters are already encoded into url, so they are not passed again
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.json()
    except requests.ConnectionError:
        return None
```
3.2 Extracting image links from the JSON data
```python
def get_images(json):
    images = json.get('rawdata').get('data')
    for image in images:
        link = image.get('img_url')
        yield link
```
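The generator can be exercised against a made-up response that mimics the rawdata → data → img_url nesting used above (sample and the URLs in it are fabricated for illustration):

```python
# minimal stand-in for the real response, with the same nesting
sample = {
    'rawdata': {
        'data': [
            {'img_url': 'https://p.example.com/a.jpg'},
            {'img_url': 'https://p.example.com/b.jpg'},
        ]
    }
}

def get_images(json):
    images = json.get('rawdata').get('data')
    for image in images:
        link = image.get('img_url')
        yield link

print(list(get_images(sample)))
```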
3.3 Naming images by their MD5 hash and saving them
Implement a save_image() method to save the images. Its argument link is one of the image URLs yielded by get_images() above. The method requests that link, takes the binary image data from the response, and writes it to a file. Using the MD5 hash of the content as the file name deduplicates identical images. The code:
```python
def save_image(link):
    data = requests.get(link).content
    os.makedirs('./image', exist_ok=True)  # make sure the target folder exists
    # use the md5 digest of the data as the file name
    with open(f'./image/{md5(data).hexdigest()}.jpg', 'wb') as f:
        f.write(data)
```
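The deduplication property follows directly from hashing: identical bytes always produce the same digest, so re-downloading an identical image simply overwrites the existing file instead of creating a duplicate. A quick check:

```python
from hashlib import md5

# the same bytes always hash to the same name
a = md5(b'same image bytes').hexdigest()
b = md5(b'same image bytes').hexdigest()
print(a == b)                    # True
print(md5(b'abc').hexdigest())   # 900150983cd24fb0d6963f7d28e17f72
```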
3.4 main() calls the other functions
```python
def main(page_num):
    json = get_page(page_num)
    if json:  # get_page() returns None on a connection error
        for link in get_images(json):
            save_image(link)
```
4. Scraping 20 pages of Toutiao street-snap image data
Here the start and end pages are defined as group_start and group_end. A Pool from multiprocessing.pool then maps main() over the page numbers with its map() method, so the pages download in parallel. Note that Pool is a process pool; since downloading is I/O-bound, multiprocessing.pool.ThreadPool would work as a drop-in alternative.
```python
if __name__ == '__main__':
    group_start = 1
    group_end = 20
    pool = Pool()
    groups = [x for x in range(group_start, group_end + 1)]
    pool.map(main, groups)
    pool.close()
    pool.join()
```
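The pool behavior is easy to see in isolation. This sketch substitutes a hypothetical fake_download() for main() and uses ThreadPool (the thread-based variant mentioned above) so it runs anywhere:

```python
from multiprocessing.pool import ThreadPool

def fake_download(page_num):
    # hypothetical stand-in for main(): just report which page was handled
    return f'page {page_num} done'

pool = ThreadPool(4)
results = pool.map(fake_download, range(1, 6))  # map() preserves input order
pool.close()
pool.join()
print(results)
```

map() blocks until every page number has been processed and returns the results in the same order as the input, even though the calls ran concurrently.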
The complete code:

```python
import os
import requests
from urllib.parse import urlencode
from hashlib import md5
from multiprocessing.pool import Pool


def get_page(page_num):
    global headers
    headers = {
        'host': 'so.toutiao.com',
        # 'referer': 'https://so.toutiao.com/search?keyword=%e8%a1%97%e6%8b%8d&pd=atlas&dvpf=pc&aid=4916&page_num=0&search_json={%22from_search_id%22:%22202112272022060101510440283ee83d67%22,%22origin_keyword%22:%22%e8%a1%97%e6%8b%8d%22,%22image_keyword%22:%22%e8%a1%97%e6%8b%8d%22}',
        'user-agent': 'mozilla/5.0 (windows nt 10.0; wow64) applewebkit/537.36 (khtml, like gecko) chrome/86.0.4240.198 safari/537.36',
        'x-requested-with': 'xmlhttprequest',
        'cookie': 'mstoken=s0dfbkz9hmylogyd3_qjhhxgrm38qtyoitnknb0t_oavfbvxuyv1jz0tt5hlgswsfmzlfd6c2lonm_5tomuqxvxjen7cixm2agwbhhrykjhg; _s_dpr=1.5; _s_ipad=0; monitor_web_id=7046351002275317255; ttwid=1%7c0ydwalndiispik3cvvhwv25u8drq3qaj08e8qoapxhs%7c1640607595%7c720e971d353416921df127996ed708931b4ae28a0a8691a5466347697e581ce8; _s_win_wh=262_623'
    }
    params = {
        'keyword': '街拍',
        'pd': 'atlas',
        'dvpf': 'pc',
        'aid': '4916',
        'page_num': page_num,
        'search_json': '%7b%22from_search_id%22%3a%22202112272022060101510440283ee83d67%22%2c%22origin_keyword%22%3a%22%e8%a1%97%e6%8b%8d%22%2c%22image_keyword%22%3a%22%e8%a1%97%e6%8b%8d%22%7d',
        'rawjson': 1,
        'search_id': '2021122721183101015104402851e3883d'
    }
    url = 'https://so.toutiao.com/search?' + urlencode(params)
    print(url)
    try:
        # the parameters are already encoded into url, so they are not passed again
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.json()
    except requests.ConnectionError:
        return None


def get_images(json):
    images = json.get('rawdata').get('data')
    for image in images:
        link = image.get('img_url')
        yield link


def save_image(link):
    data = requests.get(link).content
    os.makedirs('./image', exist_ok=True)  # make sure the target folder exists
    # use the md5 digest of the data as the file name
    with open(f'./image/{md5(data).hexdigest()}.jpg', 'wb') as f:
        f.write(data)


def main(page_num):
    json = get_page(page_num)
    if json:  # get_page() returns None on a connection error
        for link in get_images(json):
            save_image(link)


if __name__ == '__main__':
    group_start = 1
    group_end = 20
    pool = Pool()
    groups = [x for x in range(group_start, group_end + 1)]
    pool.map(main, groups)
    pool.close()
    pool.join()
```
That concludes this Python Ajax crawler case study. For more on Python Ajax crawlers, search earlier articles or browse the related articles below. Thanks for your continued support!