Scrapy Shell
The Scrapy shell is an interactive console that lets you try out and debug scraping code without starting a spider.
Launching the Scrapy shell
```
scrapy shell "https://hr.tencent.com/position.php?&start=0#a"
```
Selectors
Scrapy selectors have built-in support for XPath and CSS Selector expressions.
A Selector has four basic methods; xpath() is the most commonly used:
- xpath(): takes an XPath expression and returns a SelectorList of all nodes matching the expression.
- extract(): serializes the matched nodes to a list of Unicode strings; extract_first() returns only the first match.
- css(): takes a CSS expression and returns a SelectorList of all nodes matching the expression, using the same CSS selector syntax you may know from BeautifulSoup4's select().
- re(): applies a regular expression to the data and returns a list of Unicode strings.
Using xpath:
```python
response.xpath('//title')
```
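A typical shell session against the page opened above might look like the sketch below; the page is downloaded into the `response` object, and the exact expressions depend on the page's markup, so they are only illustrative.
```python
# inside `scrapy shell "<url>"`, the downloaded page is available as `response`
response.xpath('//title/text()').extract_first()   # page title as a string
response.css('title::text').extract_first()        # the same thing with a CSS selector
response.xpath('//a/@href').extract()              # all link targets on the page
response.xpath('//title/text()').re(r'(\w+)')      # regex applied to the matched text
```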
The Spider class
A Spider defines how a site (or a group of sites) will be scraped: which crawling actions to take (for example, whether to follow links) and how to extract structured data (items) from page content. In other words, the Spider is where you define the crawling behaviour and the parsing logic for a particular site (or group of sites).

scrapy.Spider is the base class; every spider you write must inherit from it.

The main methods, in the order they are called:
- __init__(): initializes the spider name and the start_urls list.
- start_requests(): calls make_requests_from_url() to build Request objects, which Scrapy downloads; the responses are passed back.
- parse(): parses a response and returns Items and/or further Requests (with a callback specified). Items are handed to the Item Pipeline for persistence, while Requests are downloaded by Scrapy and handled by the specified callback (parse() by default); this loop continues until all data has been processed.
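A minimal spider sketch illustrating this cycle; the site and selectors follow the public practice site quotes.toscrape.com and are assumptions rather than part of the original notes.
```python
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"                                   # unique spider name (required)
    start_urls = ["http://quotes.toscrape.com/"]      # start_requests() builds Requests from these

    def parse(self, response):
        # yield items extracted from the current page
        for quote in response.css("div.quote"):
            yield {"text": quote.css("span.text::text").extract_first()}
        # yield a follow-up Request; its response is handled by the given callback
        next_page = response.css("li.next a::attr(href)").extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
```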
Spider class source (for reference)
```python
# Base class for all spiders; user-defined spiders must inherit from it
class Spider(object_ref):

    # name: the string that defines the spider's name.
    # It is how Scrapy locates (and instantiates) the spider, so it must be unique.
    # name is the spider's most important attribute, and it is required.
    # A common convention is to name the spider after the site's domain (with or
    # without the suffix); e.g. a spider that crawls mywebsite.com is usually
    # called mywebsite.
    name = None

    # Initialization: set the spider name and start_urls
    def __init__(self, name=None, **kwargs):
        if name is not None:
            self.name = name
        # If the spider has no name, abort and raise an error
        elif not getattr(self, 'name', None):
            raise ValueError("%s must have a name" % type(self).__name__)

        # Python objects and types store their members in the built-in __dict__
        self.__dict__.update(kwargs)

        # List of URLs the spider starts crawling from when no other URLs are
        # specified. The first pages fetched will come from this list; subsequent
        # URLs are extracted from the downloaded data.
        if not hasattr(self, 'start_urls'):
            self.start_urls = []

    # Log messages produced while Scrapy runs
    def log(self, message, level=log.DEBUG, **kw):
        log.msg(message, spider=self, level=level, **kw)

    # Bind the spider to a crawler; assert that it is not already bound to one
    def set_crawler(self, crawler):
        assert not hasattr(self, '_crawler'), "Spider already bounded to %s" % crawler
        self._crawler = crawler

    @property
    def crawler(self):
        assert hasattr(self, '_crawler'), "Spider not bounded to any crawler"
        return self._crawler

    @property
    def settings(self):
        return self.crawler.settings

    # Reads the addresses in start_urls and builds a Request for each one;
    # Scrapy downloads them and returns the Responses.
    # This method is called only once.
    def start_requests(self):
        for url in self.start_urls:
            yield self.make_requests_from_url(url)

    # Called from start_requests(); this is the function that actually builds the
    # Request. The Request's default callback is parse(), and the method is GET.
    def make_requests_from_url(self, url):
        return Request(url, dont_filter=True)

    # Default callback for Request objects: handles the returned response and
    # produces Item and/or Request objects. Users must implement this.
    def parse(self, response):
        raise NotImplementedError

    @classmethod
    def handles_request(cls, request):
        return url_is_from_spider(request.url, cls)

    def __str__(self):
        return "<%s %r at 0x%0x>" % (type(self).__name__, self.name, id(self))

    __repr__ = __str__
```
Main attributes and methods
- name
  A string defining the spider's name; it must be unique.
- allowed_domains
  An optional list of the domains the spider is allowed to crawl.
- start_urls
  The initial list of URLs. When no particular URLs are specified, the spider starts crawling from this list.
- start_requests(self)
  Must return an iterable containing the first Requests to crawl (the default implementation builds them from the URLs in start_urls). Called when the spider is opened for crawling and no particular URLs are specified.
- parse(self, response)
  The default callback for Requests that do not specify one. It handles the response returned for a URL and produces Items and/or further Requests.
- log(self, message[, level, component])
  Records a log message via scrapy.log.msg().
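start_requests() from the list above can be overridden when the initial requests need custom headers or a POST body; a hedged sketch follows, in which the URL, form fields, and class names are placeholders rather than anything from the original notes.
```python
import scrapy

class LoginSpider(scrapy.Spider):
    name = "login_example"
    # hypothetical endpoint, shown only to illustrate overriding start_requests()
    login_url = "https://example.com/login"

    def start_requests(self):
        # replace the default GET requests built from start_urls with a POST login
        yield scrapy.FormRequest(
            self.login_url,
            formdata={"user": "me", "pass": "secret"},   # placeholder credentials
            callback=self.after_login,
        )

    def after_login(self, response):
        # continue crawling from pages that require the session cookie
        yield scrapy.Request("https://example.com/private", callback=self.parse)

    def parse(self, response):
        self.log("Got %s" % response.url)
```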
CrawlSpider
CrawlSpider is a subclass of Spider. The base Spider class is designed to crawl only the pages in start_urls, whereas CrawlSpider defines a set of rules that provide a convenient mechanism for following links, which makes it better suited for crawls that extract links from downloaded pages and keep following them.
CrawlSpider class source (for reference)
```python
class CrawlSpider(Spider):
    rules = ()

    def __init__(self, *a, **kw):
        super(CrawlSpider, self).__init__(*a, **kw)
        self._compile_rules()

    # parse() is called first to handle the responses returned for start_urls.
    # It hands those responses to _parse_response(), with parse_start_url() as the
    # callback and the follow flag set to True, and returns the items plus the
    # followed Request objects.
    def parse(self, response):
        return self._parse_response(response, self.parse_start_url, cb_kwargs={}, follow=True)

    # Handles the responses returned for start_urls; override it as needed
    def parse_start_url(self, response):
        return []

    def process_results(self, response, results):
        return results

    # Extracts from the response every link that matches any of the user-defined
    # rules and turns it into a Request object
    def _requests_to_follow(self, response):
        if not isinstance(response, HtmlResponse):
            return
        seen = set()
        # Extract all links in the response; a link is accepted as soon as it
        # matches any one rule
        for n, rule in enumerate(self._rules):
            links = [l for l in rule.link_extractor.extract_links(response) if l not in seen]
            # Run the user-supplied process_links over the extracted links
            if links and rule.process_links:
                links = rule.process_links(links)
            # Add each link to the `seen` set, build a Request for it, and set the
            # callback to _response_downloaded()
            for link in links:
                seen.add(link)
                # Build the Request; the callback defined in the Rule becomes the
                # callback of this Request
                r = Request(url=link.url, callback=self._response_downloaded)
                r.meta.update(rule=n, link_text=link.text)
                # Run process_request() on each Request. It defaults to identity,
                # i.e. the Request is returned unchanged.
                yield rule.process_request(r)

    # Handles the links extracted by the rules and returns items and requests
    def _response_downloaded(self, response):
        rule = self._rules[response.meta['rule']]
        return self._parse_response(response, rule.callback, rule.cb_kwargs, rule.follow)

    # Parses the response object with `callback` and yields Request or Item objects
    def _parse_response(self, response, callback, cb_kwargs, follow=True):
        # First check whether a callback was set (it may be a rule's callback or
        # parse_start_url). If so, process the response with it, then pass the
        # result through process_results(); cb_res is an iterable of results.
        if callback:
            # When invoked from parse(), the results are parsed into Request objects;
            # when invoked through a rule callback, they are parsed into Items.
            cb_res = callback(response, **cb_kwargs) or ()
            cb_res = self.process_results(response, cb_res)
            for requests_or_item in iterate_spider_output(cb_res):
                yield requests_or_item

        # If following is enabled, use the defined rules to extract and yield
        # the follow-up Request objects
        if follow and self._follow_links:
            # Yield each Request object
            for request_or_item in self._requests_to_follow(response):
                yield request_or_item

    def _compile_rules(self):
        def get_method(method):
            if callable(method):
                return method
            elif isinstance(method, basestring):
                return getattr(self, method, None)

        self._rules = [copy.copy(r) for r in self.rules]
        for rule in self._rules:
            rule.callback = get_method(rule.callback)
            rule.process_links = get_method(rule.process_links)
            rule.process_request = get_method(rule.process_request)

    def set_crawler(self, crawler):
        super(CrawlSpider, self).set_crawler(crawler)
        self._follow_links = crawler.settings.getbool('CRAWLSPIDER_FOLLOW_LINKS', True)
```
LinkExtractors
The purpose of a LinkExtractor is simply to extract links. Every LinkExtractor has a single public method, extract_links(), which receives a Response object and returns a list of scrapy.link.Link objects.
```python
scrapy.linkextractors.LinkExtractor(
    allow = (),
    deny = (),
    allow_domains = (),
    deny_domains = (),
    deny_extensions = None,
    restrict_xpaths = (),
    tags = ('a', 'area'),
    attrs = ('href',),
    canonicalize = True,
    unique = True,
    process_value = None
)
```
Main parameters:
- allow: only URLs matching this regular expression are extracted; if empty, every link matches.
- deny: URLs matching this regular expression (or list of regular expressions) are never extracted.
- allow_domains: domains whose links will be extracted.
- deny_domains: domains whose links will never be extracted.
- restrict_xpaths: XPath expressions that, together with allow, restrict the regions of the page from which links are extracted.
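A quick way to experiment with these parameters is inside the Scrapy shell; the allow pattern below is only an assumption about what the detail-page URLs look like.
```python
from scrapy.linkextractors import LinkExtractor

# extract only links whose URL matches the (assumed) detail-page pattern
le = LinkExtractor(allow=(r'position_detail\.php',))
for link in le.extract_links(response):   # `response` is provided by the shell
    print(link.url, link.text)
```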
rules
The rules attribute holds one or more Rule objects; each Rule defines a particular behaviour for crawling the site. If several rules match the same link, the first one, in the order they are defined in this attribute, is used.

```python
scrapy.spiders.Rule(
    link_extractor,
    callback = None,
    cb_kwargs = None,
    follow = None,
    process_links = None,
    process_request = None
)
```

- link_extractor: a Link Extractor object that defines which links to extract.
- callback: the callable (or the name of a spider method) called for every link extracted by link_extractor; it receives a response as its first argument.
  Note: avoid using parse as the callback when writing crawl rules. CrawlSpider uses the parse method to implement its own logic, so overriding parse breaks the spider.
- follow: a boolean specifying whether links should be followed from responses extracted with this rule. If callback is None, follow defaults to True; otherwise it defaults to False.
- process_links: the name of a spider method called with the list of links extracted from each response by link_extractor; mainly used for filtering links.
- process_request: the name of a spider method called for every Request extracted by this rule (used to filter requests).

A complete CrawlSpider example combining LinkExtractor and Rule is sketched below.
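Putting LinkExtractor and Rule together, a hedged CrawlSpider sketch might look like the following; the URL patterns mirror the Tencent recruitment URL used earlier in these notes and are assumptions about that site's structure.
```python
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class TencentJobsSpider(CrawlSpider):
    name = "tencent_jobs"
    allowed_domains = ["hr.tencent.com"]
    start_urls = ["https://hr.tencent.com/position.php"]

    rules = (
        # follow pagination links; no callback, so follow defaults to True
        Rule(LinkExtractor(allow=(r'position\.php\?&start=\d+',))),
        # parse detail pages with parse_item; note the callback is NOT named `parse`
        Rule(LinkExtractor(allow=(r'position_detail\.php',)), callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        yield {
            "url": response.url,
            "title": response.xpath('//title/text()').extract_first(),
        }
```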
The robots protocol
The Robots protocol (also known as the crawler or robot protocol), formally the Robots Exclusion Protocol, is how a website tells crawlers and search engines which pages may be fetched and which may not. robots.txt is a plain text file. When a crawler visits a site, it first checks whether robots.txt exists in the site's root directory; if it does, the crawler determines its access scope from the file's contents, and if it does not, the crawler may access every page on the site that is not password protected.
```
User-agent: *              # "*" is a wildcard matching every kind of crawler
Disallow: /admin/          # forbid crawling anything under the /admin/ directory
Disallow: /require/        # forbid crawling anything under the /require/ directory
Disallow: /ABC/            # forbid crawling anything under the /ABC/ directory
Disallow: /cgi-bin/*.htm   # forbid any URL under /cgi-bin/ (including subdirectories) ending in ".htm"
Disallow: /*?*             # forbid any URL containing a question mark (?)
Disallow: /*.jpg$          # forbid any URL ending in ".jpg"
Allow: /*.gif$             # allow crawling pages and images ending in ".gif"
Sitemap: <sitemap URL>     # tells crawlers where the sitemap is
```
Worked example: Taobao's robots.txt file
http://www.taobao.com/robots.txt
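To check programmatically how such rules apply to a specific URL, Python's standard library robotparser can be used; a small sketch (the path being tested is only an example):
```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.taobao.com/robots.txt")
rp.read()   # download and parse the robots.txt
# may any user agent ("*") fetch this path according to the rules?
print(rp.can_fetch("*", "https://www.taobao.com/markets/"))
```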
To ignore the robots protocol, change ROBOTSTXT_OBEY = True to ROBOTSTXT_OBEY = False in settings.py.
Logging
Scrapy provides logging, which is exposed through the logging module.
You can add the following lines anywhere in settings.py to make the output much cleaner:
```python
LOG_ENABLED = True                # enable logging
LOG_FILE = "TencentSpider.log"    # log file name
LOG_LEVEL = "INFO"                # log level
```
Log levels
Scrapy provides five logging levels:
CRITICAL - critical errors
ERROR - regular errors
WARNING - warning messages
INFO - informational messages
DEBUG - debugging messages
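Messages can be emitted at these levels from inside a spider. Recent Scrapy versions expose a self.logger (a standard logging.Logger) on every spider, while older code uses self.log(); a short sketch with a placeholder site:
```python
import scrapy

class LoggingDemoSpider(scrapy.Spider):
    name = "logging_demo"
    start_urls = ["http://quotes.toscrape.com/"]   # placeholder practice site

    def parse(self, response):
        self.logger.debug("parsing %s", response.url)        # DEBUG level
        self.logger.info("got status %s", response.status)   # INFO level
        if response.status >= 400:
            self.logger.error("bad response from %s", response.url)   # ERROR level
```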
logging settings
Logging can be configured with the following settings in settings.py:
LOG_ENABLED
Default: True; enables logging.
LOG_ENCODING
Default: 'utf-8'; the encoding used for logging.
LOG_FILE
Default: None; the file name for the logging output, created in the current directory.
LOG_LEVEL
Default: 'DEBUG'; the minimum level to log.
Scrapy's own log module has been deprecated; you can also use Python's built-in logging module instead.
```python
import logging

LOG_FORMAT = "%(asctime)s - %(levelname)s - %(message)s"   # output format
DATE_FORMAT = "%Y/%m/%d %H:%M:%S"                          # timestamp format
logging.basicConfig(filename='sina.log', filemode='a+', format=LOG_FORMAT, datefmt=DATE_FORMAT)

logging.warning('something went wrong')
```
Setting the crawl delay in settings.py
```python
DOWNLOAD_DELAY = 0.25   # set the download delay to 250 ms
```
Exercise: scrape Qiushibaike, following the pagination; the page URL pattern is https://www.qiushibaike.com/8hr/page/{}/
Exercise: scrape the Tencent recruitment site, paging through the listings to collect the job title, posting date, and number of openings, and store them in a database: https://hr.tencent.com/position.php?keywords=&lid=2218&tid=87&start=0#a
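A bare skeleton for the pagination part of the first exercise, only as a starting point: the number of pages and the CSS selector are assumptions that must be checked against the live page.
```python
import scrapy

class QiushiSpider(scrapy.Spider):
    name = "qiushi"
    # build the start URLs from the pagination pattern given above
    # (13 pages is an assumption; adjust to the real page count)
    start_urls = [
        "https://www.qiushibaike.com/8hr/page/{}/".format(page)
        for page in range(1, 14)
    ]

    def parse(self, response):
        # placeholder selector; inspect the page to find the real item markup
        for post in response.css("div.article"):
            yield {"content": "".join(post.css("::text").extract()).strip()}
```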
Settings
Scrapy settings provide a way to customize the behaviour of Scrapy components, including the core, extensions, pipelines, and spiders.
BOT_NAME
Default: 'scrapybot'
The name of the bot implemented by this Scrapy project (also the project name). It is used to construct the default User-Agent and for logging.
It is automatically populated when you create a project with the startproject command.
CONCURRENT_ITEMS
Default: 100
Maximum number of concurrent items (per response) to process in the Item Processor (i.e. the Item Pipeline).
CONCURRENT_REQUESTS
Default: 16
Maximum number of concurrent requests performed by the Scrapy downloader.
CONCURRENT_REQUESTS_PER_DOMAIN
Default: 8
Maximum number of concurrent requests performed to any single domain.
CONCURRENT_REQUESTS_PER_IP
Default: 0
Maximum number of concurrent requests performed to any single IP. If non-zero, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and this limit is used instead; in other words, concurrency is limited per IP rather than per domain.
This setting also affects DOWNLOAD_DELAY: if CONCURRENT_REQUESTS_PER_IP is non-zero, the download delay is enforced per IP rather than per domain.
DEFAULT_REQUEST_HEADERS
```python
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}
```
The default headers used for Scrapy HTTP requests, populated by DefaultHeadersMiddleware.
DEPTH_LIMIT
Default: 0
The maximum depth allowed for the crawl. 0 means no limit.
DEPTH_PRIORITY
Default: 0
An integer used to adjust request priority based on depth.
If 0, no priority adjustment is made based on depth.
DEPTH_STATS
Default: True
Whether to collect maximum-depth statistics.
DEPTH_STATS_VERBOSE
Default: False
Whether to collect verbose depth statistics. If enabled, the number of requests at each depth is collected in the stats.
DNSCACHE_ENABLED
Default: True
Whether to enable the DNS in-memory cache.
DOWNLOADER
Default: 'scrapy.core.downloader.Downloader'
The downloader used for the crawl.
DOWNLOADER_MIDDLEWARES
Default: {}
A dict of the downloader middlewares enabled in your project, and their orders.
DOWNLOADER_MIDDLEWARES_BASE
Default:
```python
{
    'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
    'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
    'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
    'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
    'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
    'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
    'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
    'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
}
```
A dict of the downloader middlewares enabled by default in Scrapy. Never modify this setting in your project; modify DOWNLOADER_MIDDLEWARES instead.
DOWNLOADER_STATS
Default: True
Whether to collect downloader statistics.
DOWNLOAD_DELAY
Default: 0
The amount of time the downloader waits before downloading consecutive pages from the same website. Use it to throttle the crawl speed and reduce load on the server. Decimal values are supported:
```python
DOWNLOAD_DELAY = 0.25    # 250 ms of delay
```
DOWNLOAD_TIMEOUT
Default: 180
The timeout for the downloader, in seconds.
EXTENSIONS
Default: {}
A dict of the extensions enabled in your project, and their orders.
EXTENSIONS_BASE
Default:
```python
{
    'scrapy.contrib.corestats.CoreStats': 0,
    'scrapy.webservice.WebService': 0,
    'scrapy.telnet.TelnetConsole': 0,
    'scrapy.contrib.memusage.MemoryUsage': 0,
    'scrapy.contrib.memdebug.MemoryDebugger': 0,
    'scrapy.contrib.closespider.CloseSpider': 0,
    'scrapy.contrib.feedexport.FeedExporter': 0,
    'scrapy.contrib.logstats.LogStats': 0,
    'scrapy.contrib.spiderstate.SpiderState': 0,
    'scrapy.contrib.throttle.AutoThrottle': 0,
}
```
The list of available built-in extensions. Note that some of them still need to be enabled through a setting. By default this setting contains all stable built-in extensions.
ITEM_PIPELINES
Default: {}
A dict of the item pipelines enabled in the project, and their orders. The dict is empty by default; the order values are arbitrary, but it is customary to keep them in the 0-1000 range.
```python
ITEM_PIPELINES = {
    'mybot.pipelines.validate.ValidateMyItem': 300,
    'mybot.pipelines.validate.StoreMyItem': 800,
}
```
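As an illustration of what such a pipeline class looks like, a minimal pipeline implements process_item() and optionally open_spider()/close_spider(); the module and output file below are hypothetical, chosen to match the ITEM_PIPELINES example above.
```python
# mybot/pipelines.py (hypothetical module)
import json

class StoreMyItem(object):
    def open_spider(self, spider):
        # called once when the spider opens
        self.file = open("items.jl", "w")

    def process_item(self, item, spider):
        # called for every item; must return the item (or raise DropItem)
        self.file.write(json.dumps(dict(item)) + "\n")
        return item

    def close_spider(self, spider):
        # called once when the spider closes
        self.file.close()
```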
ITEM_PIPELINES_BASE
Default: {}
A dict of the pipelines enabled by default. Never modify this setting in your project; modify ITEM_PIPELINES instead.
LOG_ENABLED
Default: True
Whether to enable logging.
LOG_ENCODING
Default: 'utf-8'
The encoding used for logging.
LOG_FILE
Default: None
File name to use for the logging output. If None, standard error is used.
LOG_LEVEL
Default: 'DEBUG'
The minimum level to log. Available levels are: CRITICAL, ERROR, WARNING, INFO, DEBUG. See Logging for more information.
LOG_STDOUT
Default: False
If True, all standard output (and error) of the process is redirected to the log. For example, executing print 'hello' will show up in the Scrapy log.
MEMDEBUG_ENABLED
Default: False
Whether to enable memory debugging.
MEMDEBUG_NOTIFY
Default: []
When memory debugging is enabled, a memory report is sent to the addresses in this setting if it is not empty; otherwise the report is written to the log.
Example:
```python
MEMDEBUG_NOTIFY = ['user@example.com']
```
MEMUSAGE_ENABLED
Default: False
Scope: scrapy.contrib.memusage
Whether to enable the memory usage extension, which shuts down the Scrapy process when it exceeds a memory limit and sends a notification email when that happens.
MEMUSAGE_LIMIT_MB
Default: 0
Scope: scrapy.contrib.memusage
The maximum amount of memory (in MB) allowed before Scrapy is shut down (if MEMUSAGE_ENABLED is True). 0 means no limit.
MEMUSAGE_NOTIFY_MAIL
Default: False
Scope: scrapy.contrib.memusage
The list of emails to notify when the memory limit is reached.
```python
MEMUSAGE_NOTIFY_MAIL = ['user@example.com']
```
MEMUSAGE_REPORT
Default: False
Scope: scrapy.contrib.memusage
Whether to send a memory usage report when each spider is closed.
MEMUSAGE_WARNING_MB
Default: 0
Scope: scrapy.contrib.memusage
The maximum amount of memory (in MB) allowed before a warning email is sent (if MEMUSAGE_ENABLED is True). 0 disables the warning.
NEWSPIDER_MODULE
Default: ''
The module where new spiders are created by the genspider command.
```python
NEWSPIDER_MODULE = 'mybot.spiders_dev'
```
REDIRECT_MAX_TIMES
Default: 20
The maximum number of redirects allowed for a request; beyond this limit, the request returns the result obtained so far. For some tasks we use the Firefox default value.
REDIRECT_PRIORITY_ADJUST
Default: +2
Adjusts the priority of a redirected request relative to the original request.
ROBOTSTXT_OBEY
Default: False
Scope: scrapy.contrib.downloadermiddleware.robotstxt
If enabled, Scrapy respects robots.txt policies.
SCHEDULER
Default: 'scrapy.core.scheduler.Scheduler'
The scheduler used for the crawl.
SPIDER_CONTRACTS
Default: {}
A dict of the scrapy contracts enabled in the project for testing spiders, and their orders.
SPIDER_CONTRACTS_BASE
Default:
```python
{
    'scrapy.contracts.default.UrlContract': 1,
    'scrapy.contracts.default.ReturnsContract': 2,
    'scrapy.contracts.default.ScrapesContract': 3,
}
```
A dict of the scrapy contracts enabled by default. Never modify this setting in your project; modify SPIDER_CONTRACTS instead.
SPIDER_MIDDLEWARES
Default: {}
A dict of the spider middlewares enabled in your project, and their orders.
SPIDER_MIDDLEWARES_BASE
Default:
```python
{
    'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
    'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
    'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
    'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
    'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
}
```
A dict of the spider middlewares enabled by default. Never modify this setting in your project; modify SPIDER_MIDDLEWARES instead.
SPIDER_MODULES
Default: []
A list of modules where Scrapy will look for spiders.
```python
SPIDER_MODULES = ['mybot.spiders_prod', 'mybot.spiders_dev']
```
STATS_CLASS
Default: 'scrapy.statscol.MemoryStatsCollector'
The class used to collect stats; it must implement the Stats Collector API.
STATS_DUMP
Default: True
Dump the Scrapy stats (to the Scrapy log) when the spider finishes.
STATSMAILER_RCPTS
Default: []
Send the Scrapy stats to these recipients after the spider finishes scraping.
TELNETCONSOLE_ENABLED
Default: True
A boolean that specifies whether the telnet console (and its extension) is enabled.
TELNETCONSOLE_PORT
Default: [6023, 6073]
The port range used by the telnet console. If set to None or 0, a dynamically assigned port is used.
TEMPLATES_DIR
Default: the templates directory inside the scrapy module
The directory searched for templates when creating a project with the startproject command.
URLLENGTH_LIMIT
Default: 2083
Scope: contrib.spidermiddleware.urllength
The maximum URL length allowed for crawled URLs.
USER_AGENT
Default: "Scrapy/VERSION (+http://scrapy.org)"
The default User-Agent used for crawling, unless overridden.
REACTOR_THREADPOOL_MAXSIZE
The size of the Twisted reactor thread pool; default 10.
Countering anti-scraping measures with Scrapy
The usual countermeasures are the following:
Set the User-Agent (rotate it randomly to mimic the browser information of different users)
There are several ways to add request headers.

Method 1: set USER_AGENT in settings.py:
```python
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36'
```

Method 2: set DEFAULT_REQUEST_HEADERS in settings.py:
```python
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36',
}
```

Method 3: set the headers in code:
```python
class HeadervalidationSpider(scrapy.Spider):
    name = 'headervalidation'
    allowed_domains = ['helloacm.com']

    # override start_requests to attach the headers to the initial Request
    def start_requests(self):
        header = {'User-Agent': 'ua'}
        yield scrapy.Request(url='', headers=header, callback=self.parse)   # fill in the target URL

    def parse(self, response):
        pass
```
Disabling cookies
Disabling cookies means not enabling the cookies middleware and not sending cookies to the server; some sites detect crawler behaviour through their use of cookies.
The CookiesMiddleware can be switched on or off with COOKIES_ENABLED.
Unless you specifically need them, disable cookies to prevent sites from using them to block the crawler:
```python
COOKIES_ENABLED = False
```
Setting a download delay
To avoid requesting pages too frequently, set the delay to 2 seconds or more:
```python
DOWNLOAD_DELAY = 3
```
Using an IP address pool
Use VPNs and proxy IPs; most sites nowadays apply IP-based blocking.
Custom middleware
process_request(self, request, spider)
Called for every request that passes through the downloader middleware.

process_response(self, request, response, spider)
Called when the downloader has completed the HTTP request and is passing the response back to the engine.
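A minimal downloader-middleware skeleton showing both hooks and their expected return values; the class name and the header it sets are arbitrary illustrations, not part of the original notes.
```python
class ExampleDownloaderMiddleware(object):
    def process_request(self, request, spider):
        # Returning None lets the remaining middlewares and the downloader keep
        # processing the request; returning a Response or Request short-circuits it.
        request.headers.setdefault("X-Example", "1")   # illustrative header
        return None

    def process_response(self, request, response, spider):
        # Must return a Response (possibly modified), a new Request, or raise IgnoreRequest.
        spider.log("got %s for %s" % (response.status, request.url))
        return response
```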
Configure USER_AGENTS and PROXIES in settings.py
```python
# Add USER_AGENTS:
USER_AGENTS = [
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5"
]

# Add the proxy IP list PROXIES:
# Free proxy IPs can be found online (they tend to be unstable), or you can buy
# a batch of reliable private proxies:
PROXIES = [
    {'ip_port': '111.8.60.9:8123'},
    {'ip_port': '101.71.27.120:80'},
    {'ip_port': '122.96.59.104:80'},
    {'ip_port': '122.224.249.122:8088'},
]
```
Creating the middleware classes
```python
# -*- coding: utf-8 -*-
import random
# assuming the project package is mySpider (see DOWNLOADER_MIDDLEWARES below)
from mySpider.settings import USER_AGENTS
from mySpider.settings import PROXIES


# Random User-Agent
class RandomUserAgent(object):
    def process_request(self, request, spider):
        useragent = random.choice(USER_AGENTS)
        request.headers.setdefault("User-Agent", useragent)


# Random proxy IP
class RandomProxy(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://" + proxy['ip_port']
```
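If a proxy from the pool turns out to be unreachable, the request simply fails. One possible way to recover, sketched under the assumption that the PROXIES list above is importable, is a process_exception() hook that swaps in another proxy and re-schedules a copy of the request (mirroring the pattern used by Scrapy's own RetryMiddleware). If you add such a class, register it in DOWNLOADER_MIDDLEWARES below as well.
```python
class RetryChangeProxyMiddleware(object):
    def process_exception(self, request, exception, spider):
        # pick another proxy and re-schedule a copy of the failed request;
        # dont_filter=True keeps the duplicate filter from dropping the retry
        proxy = random.choice(PROXIES)
        spider.log("retrying %s with proxy %s" % (request.url, proxy['ip_port']))
        retryreq = request.copy()
        retryreq.meta['proxy'] = "http://" + proxy['ip_port']
        retryreq.dont_filter = True
        return retryreq
```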
Enabling the middleware
```python
# Finally, add your downloader middleware classes to DOWNLOADER_MIDDLEWARES in settings.py
DOWNLOADER_MIDDLEWARES = {
    'mySpider.middlewares.MyCustomDownloaderMiddleware': 543,
    'mySpider.middlewares.RandomUserAgent': 81,
    'mySpider.middlewares.RandomProxy': 100,
}
```