webdriver anti-crawler detection
'''
webdriver anti-crawling
When a browser is driven by Selenium, the page can check whether navigator.webdriver is true;
once detected, the visitor is treated as a crawler.
Case 1: http://www.porters.vip/features/webdriver.html
Case 2: the Taobao login page https://login.taobao.com/member/login.jhtml
        Its index.js contains the following check:
        function r() {
            return "$cdc_asdjflasutopfhvcZLmcfl_" in u || f.webdriver
        }
Workarounds:
Method 1: override navigator.webdriver with a JS snippet
    script = 'Object.defineProperty(navigator,"webdriver",{get:() => false,})'
    browser.execute_script(script)
Method 2: bypass via a mitmproxy proxy (a sketch is given after the listing below)
'''
import asyncio
import time
from pyppeteer import launch
from selenium.webdriver import Chrome
class WebdriverSpider:
    @classmethod
    def get_webdriver_spider(cls):
        return cls()

    def chrome_driver(self):
        browser = Chrome(r'E:\tool\chromedriver_win32\chromedriver.exe')
        browser.get('http://www.porters.vip/features/webdriver.html')
        # Execute a JS snippet that redefines navigator.webdriver so the page reads it as false
        script = 'Object.defineProperty(navigator,"webdriver",{get:() => false,})'
        browser.execute_script(script)
        # Click the detection button, then read the result shown in the modal dialog
        browser.find_element_by_xpath("//button[@class='btn btn-primary btn-lg']").click()
        element = browser.find_element_by_xpath("//div[@class='modal-content']")
        print(element.text)
        time.sleep(10)
        browser.close()

    def run(self):
        self.chrome_driver()
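# A minimal sketch of a more robust variant (not in the original post): executing the
# override with execute_script() only takes effect after the page has loaded, so a
# detection script that runs earlier can still see webdriver=true. Assuming a Selenium
# and chromedriver combination that supports execute_cdp_cmd (Selenium >= 3.141 with
# Chrome), the override can be registered before any page script runs.
def chrome_driver_cdp():
    browser = Chrome(r'E:\tool\chromedriver_win32\chromedriver.exe')
    # Register the override for every new document, before the page's own JS executes
    browser.execute_cdp_cmd(
        'Page.addScriptToEvaluateOnNewDocument',
        {'source': 'Object.defineProperty(navigator,"webdriver",{get:() => false,})'}
    )
    browser.get('http://www.porters.vip/features/webdriver.html')
    browser.find_element_by_xpath("//button[@class='btn btn-primary btn-lg']").click()
    print(browser.find_element_by_xpath("//div[@class='modal-content']").text)
    browser.close()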
async def main():
    browser = await launch()
    page = await browser.newPage()
    await page.goto('http://www.porters.vip/features/webdriver.html')
    await page.click('.btn.btn-primary.btn-lg')
    await asyncio.sleep(1)
    await page.screenshot({'path': 'webdriver.png'})
    await browser.close()
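# A sketch of the same idea under pyppeteer (an addition, not part of the original
# main() above): pyppeteer mirrors puppeteer's API, so page.evaluateOnNewDocument()
# can install the navigator.webdriver override before the page's scripts run.
async def main_stealth():
    browser = await launch()
    page = await browser.newPage()
    # Runs in every new document, ahead of the page's own detection script
    await page.evaluateOnNewDocument(
        '() => Object.defineProperty(navigator, "webdriver", {get: () => false})'
    )
    await page.goto('http://www.porters.vip/features/webdriver.html')
    await page.click('.btn.btn-primary.btn-lg')
    await page.screenshot({'path': 'webdriver_stealth.png'})
    await browser.close()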
if __name__ == '__main__':
    WebdriverSpider.get_webdriver_spider().run()
    # asyncio.get_event_loop().run_until_complete(main())
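Method 2 from the docstring has no code in the original post. Below is a minimal sketch of one way it could look, assuming the goal is simply to inject the navigator.webdriver override into every HTML response before the page's own scripts run; the file name mitm_inject.py and the injection point are illustrative, not from the original.

# mitm_inject.py -- run with: mitmdump -s mitm_inject.py, then start the browser
# with --proxy-server=http://127.0.0.1:8080
from mitmproxy import http

OVERRIDE = ('<script>Object.defineProperty(navigator,"webdriver",'
            '{get:() => false})</script>')

def response(flow: http.HTTPFlow) -> None:
    # Inject the override at the top of <head> in every HTML response,
    # so it executes before the page's detection script.
    if 'text/html' in flow.response.headers.get('content-type', ''):
        html = flow.response.get_text()
        flow.response.set_text(html.replace('<head>', '<head>' + OVERRIDE, 1))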
-- Source: "反爬虫原理与绕过" (Anti-crawler Principles and Bypassing)
Original post: https://blog.csdn.net/weixin_42670402/article/details/110852273