Complete example code for a Python crawler that captures packets and parses JSON
Capturing packets and parsing JSON with a Python crawler
When writing a Python crawler, the URL you capture may look like the one below; opening it directly in the browser only shows a page like this, and the crawl cannot continue from there:
For example:
To crawl the data on the second page of the site, press F12, open the Network tab and select XHR, preferably clicking the clear button first, as shown below:
Click "page 2" and a POST request appears (sometimes it is a GET request instead; the short sketch below shows how that case differs). Click the URL of that POST request (this article uses a POST request as the example),
as shown:
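If the captured request turns out to be a GET rather than a POST, only the way the parameters are sent changes: the query string goes into params, while a POST payload is sent as the request body. A minimal sketch, using a placeholder URL and parameter names rather than the real interface:

import requests

# Placeholder URL and parameter names, for illustration only.
# GET: the values shown under "Query String Parameters" go into `params`.
get_resp = requests.get(
    'https://example.com/api/list',
    params={'pageindex': 2, 'pagesize': 10},
    headers={'user-agent': 'Mozilla/5.0'},
)

# POST: the body shown under "Request Payload" is sent as JSON.
post_resp = requests.post(
    'https://example.com/api/list',
    json={'pageindex': 2, 'pagesize': 10},
    headers={'user-agent': 'Mozilla/5.0'},
)
print(get_resp.status_code, post_resp.status_code)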
Then copy the request parameters.
The code:
import requests
import json

url = 'https://m.ctrip.com/restapi/soa2/13444/json/getcommentcollapselist?_fxpcqlniredt=09031130211378497389'

# Request headers copied from the DevTools "Headers" panel.
# 'authority', 'method', 'path' and 'scheme' are HTTP/2 pseudo-headers (shown as
# ':authority' etc. in the browser) and can normally be left out; the same goes
# for 'content-length', which requests calculates by itself.
# The cookie is session-specific -- replace it with the one from your own capture.
header = {
    'authority': 'm.ctrip.com',
    'method': 'post',
    'path': '/restapi/soa2/13444/json/getcommentcollapselist?_fxpcqlniredt=09031130211378497389',
    'scheme': 'https',
    'accept': '*/*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'zh-CN,zh;q=0.9',
    'cache-control': 'no-cache',
    'content-length': '278',
    'content-type': 'application/json',
    'cookie': '__utma=1.1986366783.1601607319.1601607319.1601607319.1; __utmz=1.1601607319.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); _rsg=blqd1d4mgx0ba_ampd3t29; _rdg=286710759c35f221c000cbec6169743cac; _rguid=0850c049-c137-4be5-90b7-0cd67093f28b; mkt_ckid=1601607321903.rzptk.lbzh; _ga=ga1.2.1986366783.1601607319; nfes_issupportwebp=1; appfloatcnt=8; _gcl_dc=gcl.1601638857.ckzg58xqlewcfqitvaodioijww; session=smartlinkcode=u155952&smartlinkkeyword=&smartlinkquary=&smartlinkhost=&smartlinklanguage=zh; union=ouid=index&allianceid=4897&sid=155952&sourceid=&createtime=1602506741&expires=1603111540922; mkt_orderclick=asid=4897155952&aid=4897&csid=155952&ouid=index&ct=1602506740926&curl=https%3a%2f%2fwww.ctrip.com%2f%3fsid%3d155952%26allianceid%3d4897%26ouid%3dindex&val={"pc_vid":"1601607319353.3cid9z"}; mkt_pagesource=pc; _rf1=218.58.59.72; _bfa=1.1601607319353.3cid9z.1.1602506738089.1602680023977.4.25; _bfi=p1%3d290510%26p2%3d290510%26v1%3d25%26v2%3d24; mkt_ckid_lmt=1602680029515; __zpspc=9.5.1602680029.1602680029.1%232%7cwww.baidu.com%7c%7c%7c%25e6%2590%25ba%25e7%25a8%258b%7c%23; _gid=ga1.2.1363667416.1602680030; _jzqco=%7c%7c%7c%7c1602680029668%7c1.672451398.1601607321899.1602506755440.1602680029526.1602506755440.1602680029526.undefined.0.0.16.16',
    'cookieorigin': 'https://you.ctrip.com',
    'origin': 'https://you.ctrip.com',
    'pragma': 'no-cache',
    'referer': 'https://you.ctrip.com/',
    'sec-fetch-dest': 'empty',
    'sec-fetch-mode': 'cors',
    'sec-fetch-site': 'same-site',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36'
}

# Request body copied from the DevTools "Request Payload" panel.
dat = {
    "arg": {
        'channeltype': 2,
        'collapsetype': 0,
        'commenttagid': 0,
        'pageindex': 1,   # page number: change this to fetch other pages
        'pagesize': 10,
        'poiid': 75648,
        'sorttype': 3,
        'sourcetype': 1,
        'startype': 0
    },
    "head": {
        'auth': "",
        'cid': "09031117213661657011",
        'ctok': "",
        'cver': "1.0",
        'extension': [],
        'lang': "01",
        'sid': "8888",
        'syscode': "09",
        'xsid': ""
    }
}

# Send the payload as a JSON string; requests.post(url, json=dat, headers=header)
# is equivalent and sets the Content-Type header automatically.
r = requests.post(url, data=json.dumps(dat), headers=header)
s = r.json()   # parse the JSON response into a Python dict
print(s)
Run result:
Then right-click the result and click "show as JSON":
Finally, the response of the target URL is displayed, and the data can be crawled.
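To actually parse the JSON rather than just print it, read the fields out of the dictionary returned by r.json() and change pageindex to walk through the pages. A minimal sketch that continues from the url, header, and dat defined above; the field names "result", "items", and "content" are assumptions for illustration, so check the real structure in the "show as JSON" view and adjust the keys accordingly:

# Continues from the url, header and dat defined in the code above.
comments = []
for page in range(1, 4):                      # first three pages as an example
    dat['arg']['pageindex'] = page            # only the page number changes per request
    resp = requests.post(url, data=json.dumps(dat), headers=header)
    body = resp.json()                        # JSON response parsed into a dict
    # "result" / "items" / "content" are hypothetical key names -- adjust them
    # to match the actual response structure.
    for item in body.get('result', {}).get('items') or []:
        comments.append(item.get('content'))
print(len(comments), 'comments collected')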
Summary
This is the end of this article on capturing packets and parsing JSON with a Python crawler. For more related content, please search earlier articles or continue browsing the related articles below. We hope you will keep supporting us!