Using Selenium in Scrapy
Scrapy is a fine tool, and so is Selenium, but the two don't combine gracefully: stuffing a blocking piece of code into a non-blocking program is enough to drive anyone mad. Even so, there are plenty of cases where you need Selenium inside Scrapy (usually because JavaScript gets in the way). Given that, let's see how to use Selenium while still playing to Scrapy's strengths. The rough plan:
- Write a dedicated SeleniumRequest class that encapsulates the Selenium-related options;
- Write a downloader middleware that launches the browser and drives it according to the attributes of each SeleniumRequest.
OK, the plan is clear, so let's roll up our sleeves and get to work.
Writing SeleniumRequest
This class obviously has to inherit from scrapy.Request, and we want it to carry a few attributes that describe what to do with the browser:
- wait_until holds the condition the browser should wait for before the page counts as loaded;
- script holds a JavaScript snippet to run once the page has loaded;
- handler is a function that takes a driver argument and is called after the page finishes loading.
The code:
import scrapy

class SeleniumRequest(scrapy.Request):
    """A Scrapy Request to be fetched through Selenium.

    :param wait_until: wait conditions, as a {by: value} dict,
        where by is one of the locator strategies in
        selenium.webdriver.common.by.By, e.g. By.ID or By.XPATH
        (only "element is present" conditions are supported)
    :type wait_until: dict
    :param wait_time: how long to wait, in seconds
    :type wait_time: int
    :param script: JavaScript snippet to execute after the page loads;
        its return value is stored in meta under the js_result key
    :param handler: a function that receives the driver instance;
        its return value is ignored
    """
    def __init__(self, url, callback=None,
                 wait_until=None, wait_time=None,
                 script=None, handler=None, **kwargs):
        self.wait_until = wait_until
        self.script = script
        self.wait_time = wait_time
        self.handler = handler
        super().__init__(url, callback, **kwargs)
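To make the pieces concrete before moving on, here is how a spider might issue one of these requests once the middleware below is in place. This is a minimal sketch: the spider name, URL, and element id are placeholders, and the handler shown simply scrolls the page.

import scrapy
from selenium.webdriver.common.by import By
# SeleniumRequest is the class defined above

def scroll_to_bottom(driver):
    # Example handler: scroll down to trigger lazily loaded content
    driver.execute_script('window.scrollTo(0, document.body.scrollHeight);')

class DemoSpider(scrapy.Spider):
    name = 'demo'

    def start_requests(self):
        yield SeleniumRequest(
            'https://example.com',
            callback=self.parse,
            wait_until={By.ID: 'content'},  # wait for #content to appear
            wait_time=10,
            script='return document.title;',
            handler=scroll_to_bottom,
        )

    def parse(self, response):
        # The script's return value was stored by the middleware
        self.logger.info(response.meta.get('js_result'))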
With that, the request class is done; next up is the downloader middleware.
Writing the downloader middleware
The downloader middleware receives each SeleniumRequest, actually launches and drives the browser, and finally wraps the page source the browser retrieved in an HtmlResponse. It has rather more to do, so let's write it step by step:
- The first step is still to define the class. The constructor needs a project settings instance, because we read the Webdriver executable path and other options from the settings file (requiring them to live in the Scrapy project settings keeps configuration in one place). The two settings are SELENIUM_DRIVER_PATH and SELENIUM_HEADLESS: the path to the driver binary, and whether to run the browser without a visible window (a settings.py sketch appears at the end of the post).
# Modules and helpers needed by all the code below
import logging
from scrapy import signals
from scrapy.http import HtmlResponse
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

logger = logging.getLogger(__name__)

class SeleniumDownloadMiddleWare(object):
    def __init__(self, settings):
        driver_path = settings['SELENIUM_DRIVER_PATH']
        headless = settings.getbool('SELENIUM_HEADLESS', True)
        # Only Chrome is supported for now
        options = webdriver.ChromeOptions()
        options.headless = headless
        # Keep the User-Agent consistent with the project settings;
        # otherwise sites that set cookies based on this header can
        # behave in unexpected ways. (Guarded with .get(), since
        # Scrapy's default DEFAULT_REQUEST_HEADERS has no User-Agent.)
        ua = settings['DEFAULT_REQUEST_HEADERS'].get('User-Agent')
        if ua:
            options.add_argument(f'user-agent={ua}')
        self._options = options
        self._driver_path = driver_path
        self._driver = None
- Next, define the from_crawler classmethod used to instantiate the class. Here we also bind the spider-closed signal, to make sure the browser is shut down properly when the spider finishes.
@classmethod
def from_crawler(cls, crawler):
    dm = cls(crawler.settings)
    crawler.signals.connect(dm.close, signal=signals.spider_closed)
    return dm
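Binding dm.close to the spider_closed signal means Scrapy will call it exactly once, when the spider shuts down.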
- Which brings us straight to the close method:
def close(self):
    if self._driver is not None:
        self._driver.quit()
        logger.debug('Selenium closed')
- Add a driver property to make the browser handy to use:
@property
def driver(self):
    if self._driver is None:
        self._driver = webdriver.Chrome(
            executable_path=self._driver_path, options=self._options
        )
    return self._driver
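Because the driver is created lazily on first access, a spider that never yields a SeleniumRequest never pays the cost of launching a browser.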
- And finally the heart of the matter: a process_request method, which is where each SeleniumRequest actually gets handled:
def process_request(self, request, spider):
    if not isinstance(request, SeleniumRequest):
        return
    self.driver.get(request.url)
    # Handle the wait conditions (falling back to a 10-second
    # timeout when no wait_time was given)
    if request.wait_until:
        for k, v in request.wait_until.items():
            condition = EC.presence_of_element_located((k, v))
            WebDriverWait(self.driver, request.wait_time or 10).until(
                condition
            )
    # Execute the JavaScript snippet, if any
    if request.script:
        result = self.driver.execute_script(request.script)
        if result is not None:
            request.meta['js_result'] = result
    # Call the handler function
    if request.handler is not None:
        request.handler(self.driver)
    # Pass the request's cookies on to the browser, then sync back
    for cookie_name, cookie_value in request.cookies.items():
        self.driver.add_cookie(
            {
                'name': cookie_name,
                'value': cookie_value
            }
        )
    request.cookies = self.driver.get_cookies()
    request.meta['browser'] = self.driver
    # Wrap the rendered page in a Response object
    body = str.encode(self.driver.page_source)
    return HtmlResponse(
        self.driver.current_url,
        body=body,
        encoding='utf-8',
        request=request
    )
And that's the lot. Because process_request returns a Response, Scrapy skips its own download for these requests and hands the rendered page straight to the spider. All that remains is to enable the middleware in the project settings (a sketch follows after the full listing). The complete code:
import logging
from scrapy import signals
from scrapy.http import HtmlResponse
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Adjust this import to point at wherever SeleniumRequest is defined
from utils.selenium import SeleniumRequest
logger = logging.getLogger(__name__)
class SeleniumDownloadMiddleWare(object):
    def __init__(self, settings):
        driver_path = settings['SELENIUM_DRIVER_PATH']
        headless = settings.getbool('SELENIUM_HEADLESS', True)
        # Guarded: Scrapy's default DEFAULT_REQUEST_HEADERS has no User-Agent
        ua = settings['DEFAULT_REQUEST_HEADERS'].get('User-Agent')
        options = webdriver.ChromeOptions()
        options.headless = headless
        if ua:
            options.add_argument(f'user-agent={ua}')
        self._options = options
        self._driver_path = driver_path
        self._driver = None
    @property
    def driver(self):
        if self._driver is None:
            self._driver = webdriver.Chrome(
                executable_path=self._driver_path, options=self._options
            )
        return self._driver
    @classmethod
    def from_crawler(cls, crawler):
        dm = cls(crawler.settings)
        crawler.signals.connect(dm.close, signal=signals.spider_closed)
        return dm
    def process_request(self, request, spider):
        if not isinstance(request, SeleniumRequest):
            return
        self.driver.get(request.url)
        # Handle the wait conditions (falling back to a 10-second
        # timeout when no wait_time was given)
        if request.wait_until:
            for k, v in request.wait_until.items():
                condition = EC.presence_of_element_located((k, v))
                WebDriverWait(self.driver, request.wait_time or 10).until(
                    condition
                )
        # Execute the JavaScript snippet, if any
        if request.script:
            result = self.driver.execute_script(request.script)
            if result is not None:
                request.meta['js_result'] = result
        # Call the handler function
        if request.handler is not None:
            request.handler(self.driver)
        # Pass the request's cookies on to the browser, then sync back
        for cookie_name, cookie_value in request.cookies.items():
            self.driver.add_cookie(
                {
                    'name': cookie_name,
                    'value': cookie_value
                }
            )
        request.cookies = self.driver.get_cookies()
        request.meta['browser'] = self.driver
        # Wrap the rendered page in a Response object
        body = str.encode(self.driver.page_source)
        return HtmlResponse(
            self.driver.current_url,
            body=body,
            encoding='utf-8',
            request=request
        )
    def close(self):
        if self._driver is not None:
            self._driver.quit()
            logger.debug('Selenium closed')
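Finally, enable the middleware in settings.py along with the settings it reads. A minimal sketch, assuming the code above lives in myproject/middlewares.py; the module path, driver path, and User-Agent string are all placeholders to adjust for your own project:

# settings.py
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.SeleniumDownloadMiddleWare': 800,
}
SELENIUM_DRIVER_PATH = '/path/to/chromedriver'
SELENIUM_HEADLESS = True  # set to False to watch the browser while debugging
DEFAULT_REQUEST_HEADERS = {
    'User-Agent': 'Mozilla/5.0 (placeholder)',
}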