Scrapy Downloader Middleware

Scrapy provides two kinds of middleware: downloader middleware (Downloader Middleware) and spider middleware (Spider Middleware).

Downloader middleware

Downloader middleware is a hook framework Scrapy provides for modifying Requests and Responses during the crawl, and is used to extend Scrapy's functionality. For example:

  1. adding certain headers to a request before it is downloaded;
  2. post-processing the response after the download completes, e.g. decompressing the body.
How to enable a downloader middleware:

In settings.py, add key-value pairs to the DOWNLOADER_MIDDLEWARES setting: the key is the middleware to enable and the value is a number giving its order; the lower the number, the higher the priority.
Scrapy also has a built-in downloader middleware setting, DOWNLOADER_MIDDLEWARES_BASE (which should not be overridden). On startup Scrapy merges DOWNLOADER_MIDDLEWARES_BASE with DOWNLOADER_MIDDLEWARES; to disable a middleware that Scrapy enables by default, set its value to None in DOWNLOADER_MIDDLEWARES (an example follows the listing below).

DOWNLOADER_MIDDLEWARES_BASE = {
    'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
    'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,
    'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
    'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
    'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
}
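
A minimal settings.py sketch of the merging described above (the path myproject.middlewares.CustomHeaderMiddleware is hypothetical): it enables one custom middleware and disables the built-in UserAgentMiddleware by setting its value to None.

#settings.py
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomHeaderMiddleware': 543,                  # enable a custom middleware with order 543
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,   # disable a middleware enabled by default
}
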
How to write a downloader middleware
class scrapy.downloadermiddlewares.DownloaderMiddleware
process_request(request, spider)

process_request() is called for every Request that passes through the downloader middleware; middlewares with higher priority are called first. The method should return one of the following: None, a Response object, a Request object, or raise an IgnoreRequest exception.

  1. Return None: Scrapy continues processing the request, calling the corresponding methods of the other middlewares (a minimal sketch follows this list);
  2. Return a Response object: Scrapy will not call the other middlewares' process_request() methods and will not download the page; the Response is returned directly;
  3. Return a Request object: Scrapy will not call the other middlewares' process_request() methods; the returned Request is handed back to the scheduler to be downloaded later;
  4. Raise an IgnoreRequest exception: the process_exception() methods of the installed middlewares are called; if none of them handles the exception, Request.errback is called; if the exception is still not handled, it is ignored and not logged.
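
As a small illustration of case 1, here is a minimal, hypothetical middleware whose process_request() adds a header and returns None so the request keeps flowing through the remaining middlewares and the downloader:

#middlewares.py (sketch)
class CustomHeaderMiddleware(object):

    def process_request(self, request, spider):
        # add a header before the request reaches the downloader
        request.headers.setdefault(b'X-Custom-Header', b'some-value')
        return None     # continue with the other middlewares and the download
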
process_response(request, response, spider)

process_response() is called for every Response that passes through the downloader middleware; middlewares with higher priority are called later, the opposite of process_request(). The method should return one of the following: a Response object, a Request object, or raise an IgnoreRequest exception.

  1. Return a Response object: Scrapy continues calling the other middlewares' process_response() methods;
  2. Return a Request object: the middleware chain stops and the Request is handed back to the scheduler to be downloaded later (a minimal sketch follows this list);
  3. Raise an IgnoreRequest exception: Request.errback is called to handle it; if it is not handled, the exception is ignored and not logged.
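
A minimal sketch of cases 1 and 2: a hypothetical middleware that re-schedules a request when the response status looks wrong and otherwise passes the response on unchanged:

#middlewares.py (sketch)
class RetryBadStatusMiddleware(object):

    def process_response(self, request, response, spider):
        if response.status == 503:
            # returning a Request stops the chain and sends it back to the scheduler
            return request.replace(dont_filter=True)
        return response     # returning the Response lets the other middlewares process it
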
process_exception(request, exception, spider)

process_exception() is called when a download handler or a process_request() method raises an exception; it should return one of the following: None, a Response object, or a Request object.

  1. Return None: Scrapy continues calling the other middlewares' process_exception() methods;
  2. Return a Response object: the process_response() chain of the installed middlewares kicks in, and the other middlewares' process_exception() methods are no longer called;
  3. Return a Request object: the process_exception() chain stops and the Request is handed back to the scheduler to be downloaded later (a sketch follows this list).
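
For example, a hypothetical middleware that re-schedules a request when the download times out:

#middlewares.py (sketch)
from twisted.internet.error import TimeoutError

class RetryOnTimeoutMiddleware(object):

    def process_exception(self, request, exception, spider):
        if isinstance(exception, TimeoutError):
            # returning a Request stops the process_exception() chain and re-schedules the download
            return request.replace(dont_filter=True)
        return None     # let the other middlewares' process_exception() handle it
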
from_crawler(cls, crawler)

If this class method exists, it is called with the crawler to create the middleware instance, and it must return a middleware object. Through the crawler you can access all of Scrapy's core components, such as settings and signals; a sketch follows.
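
A minimal sketch of this pattern (the setting name MYMIDDLEWARE_ENABLED is made up for the example):

#middlewares.py (sketch)
from scrapy import signals
from scrapy.exceptions import NotConfigured

class MyMiddleware(object):

    def __init__(self, settings):
        self.settings = settings

    @classmethod
    def from_crawler(cls, crawler):
        if not crawler.settings.getbool('MYMIDDLEWARE_ENABLED'):
            raise NotConfigured                 # tells Scrapy not to install this middleware
        o = cls(crawler.settings)
        # the crawler also exposes signals, stats, the engine, etc.
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        return o

    def spider_opened(self, spider):
        spider.logger.info('MyMiddleware enabled for spider %s', spider.name)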

Some downloader middlewares provided by Scrapy

The following covers a few commonly used downloader middlewares; for more, see the documentation and the source code.

HttpProxyMiddleware

scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware

Used to set a proxy server: the proxy is applied by setting Request.meta['proxy'], and proxies are also read in turn from the environment variables http_proxy, https_proxy and no_proxy. Let's test it with the response of http://httpbin.org/ip:

# shell command
export http_proxy='http://193.112.216.55:1234'
# -*- coding: utf-8 -*-
import scrapy

class ProxySpider(scrapy.Spider):
    name = 'proxy'
    allowed_domains = ['httpbin.org']
    start_urls = ['http://httpbin.org/ip']

    def parse(self, response):
        print(response.text)

Run scrapy crawl proxy --nolog and you get the following result:

{"origin":"111.231.115.150, 193.112.216.55"}

The IP of the proxy we set is returned.
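
Besides the environment variable, the proxy can also be chosen per request by setting Request.meta['proxy'], which HttpProxyMiddleware reads directly. A sketch inside the spider above, reusing the same proxy address:

    def start_requests(self):
        for url in self.start_urls:
            # HttpProxyMiddleware uses the value from meta['proxy'] for this request
            yield scrapy.Request(url, meta={'proxy': 'http://193.112.216.55:1234'}, callback=self.parse)
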

UserAgentMiddleware

scrapy.downloadermiddlewares.useragent.UserAgentMiddleware

The user agent is set through the USER_AGENT setting. Let's test it with the response of http://httpbin.org/headers:

#settings.py
#...
# UserAgentMiddleware is enabled by default
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36'
#...
# -*- coding: utf-8 -*-
import scrapy

class UserAgentSpider(scrapy.Spider):
    name = 'user_agent'
    allowed_domains = ['httpbin.org']
    start_urls = ['http://httpbin.org/headers']

    def parse(self, response):
        print(response.text)

Run scrapy crawl user_agent --nolog and you get the following result:

{"headers":{"Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8","Accept-Encoding":"gzip,deflate","Accept-Language":"en","Connection":"close","Host":"httpbin.org","User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36"}}

The user agent we set is returned.
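
UserAgentMiddleware also honours a user_agent attribute on the spider (its spider_opened handler reads it), so a single spider can override the global setting; a sketch with a made-up value:

class UserAgentSpider(scrapy.Spider):
    name = 'user_agent'
    # overrides the global USER_AGENT for this spider only
    user_agent = 'MyCustomAgent/1.0'
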

Using random user agents and proxy IPs

Some websites detect crawlers by inspecting the visiting IP and User-Agent: if a large number of requests come from the same IP, the site may conclude the IP is crawling and refuse its requests, and some sites also check the User-Agent. By crawling with multiple proxy IPs and varying User-Agents we can avoid getting an IP banned.
We can subclass HttpProxyMiddleware and UserAgentMiddleware and adapt them so that Scrapy uses proxies and user agents the way we want. For the originals, see httpproxy.py and useragent.py.

The code is as follows:

#middlewares.py
# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals
from scrapy.downloadermiddlewares.httpproxy import HttpProxyMiddleware
from scrapy.exceptions import NotConfigured
from collections import defaultdict
from urllib.parse import urlparse
from faker import Faker                 # Faker generates fake data; install with pip install faker
import random

class RandomHttpProxyMiddleware(HttpProxyMiddleware):

    def __init__(self, auth_encoding='latin-1', proxy_list=None):
        if not proxy_list:
            raise NotConfigured
        self.auth_encoding = auth_encoding              # keep the attribute used by the parent HttpProxyMiddleware
        self.proxies = defaultdict(list)
        for proxy in proxy_list:
            parse = urlparse(proxy)
            self.proxies[parse.scheme].append(proxy)    # build a dict: scheme -> list of proxy URLs

    @classmethod
    def from_crawler(cls, crawler):
        if not crawler.settings.get('HTTP_PROXY_LIST'):
            raise NotConfigured

        http_proxy_list = crawler.settings.get('HTTP_PROXY_LIST')   # read the proxy list from the settings
        auth_encoding = crawler.settings.get('HTTPPROXY_AUTH_ENCODING', 'latin-1')

        return cls(auth_encoding, http_proxy_list)

    def _set_proxy(self, request, scheme):
        proxy = random.choice(self.proxies[scheme])     # randomly pick a proxy for this scheme
        request.meta['proxy'] = proxy
        
class RandomUserAgentMiddleware(object):

    def __init__(self):
        self.faker = Faker(locale='zh_CN')      # Faker instance with the zh_CN locale
        self.user_agent = ''

    @classmethod
    def from_crawler(cls, crawler):
        o = cls()
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        return o

    def spider_opened(self, spider):
        self.user_agent = getattr(spider, 'user_agent',self.user_agent)

    def process_request(self, request, spider):
        self.user_agent = self.faker.user_agent()       # get a random User-Agent
        request.headers.setdefault(b'User-Agent', self.user_agent)
#settings.py
#...
DOWNLOADER_MIDDLEWARES = {
    'newproject.middlewares.RandomHttpProxyMiddleware': 543,
    'newproject.middlewares.RandomUserAgentMiddleware': 550,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': None,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}
HTTP_PROXY_LIST = [
    'http://193.112.216.55:1234',
    'http://118.24.172.34:1234',
]
#...
#anything.py
# -*- coding: utf-8 -*-
import scrapy
import json
import pprint
class AnythingSpider(scrapy.Spider):
    name = 'anything'
    allowed_domains = ['httpbin.org']
    start_urls = ['http://httpbin.org/anything']

    def parse(self, response):
        ret = json.loads(response.text)
        pprint.pprint(ret)

The faker library used above generates fake data of all kinds and is very handy. By requesting http://httpbin.org/anything we can see the content of our own request, as shown below:

#scrapy crawl anything --nolog
{'args': {},
 'data': '',
 'files': {},
 'form': {},
 'headers': {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
             'Accept-Encoding': 'gzip,deflate',
             'Accept-Language': 'en',
             'Cache-Control': 'max-age=259200',
             'Connection': 'close',
             'Host': 'httpbin.org',
             'User-Agent': 'Opera/8.85.(Windows NT 5.2; sc-IT) Presto/2.9.177 '
                           'Version/10.00'},
 'json': None,
 'method': 'GET',
 'origin': '193.112.216.55',
 'url': 'http://httpbin.org/anything'}
 
#scrapy crawl anything --nolog
{'args': {},
 'data': '',
 'files': {},
 'form': {},
 'headers': {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
             'Accept-Encoding': 'gzip,deflate',
             'Accept-Language': 'en',
             'Cache-Control': 'max-age=259200',
             'Connection': 'close',
             'Host': 'httpbin.org',
             'User-Agent': 'Mozilla/5.0 (Macintosh; PPC Mac OS X 10_12_3) '
                           'AppleWebKit/5342 (KHTML, like Gecko) '
                           'Chrome/40.0.810.0 Safari/5342'},
 'json': None,
 'method': 'GET',
 'origin': '118.24.172.34',
 'url': 'http://httpbin.org/anything'}

As you can see, through the downloader middlewares our spider kept changing its IP and User-Agent.

Summary

This post covered what downloader middleware is, how to write and enable custom downloader middleware, and finished with a hands-on custom middleware. Next we will look at the other kind of middleware: spider middleware.

