Recently I started learning about and writing crawlers for work. I picked up a lot along the way, so I organized my notes and am posting them here.
Requirements
The purpose of this crawler is to detect website vulnerabilities, so I want it to work somewhat like Burp Suite's history log.
The initial requirement is simple: crawl a site's links, deduplicate them, and try to bypass anti-crawler measures.
A further goal is to record every request a site makes, the way Burp Suite does, in order to capture more complete information.
URL crawler
A simple crawler can be built with urllib and the like, but to make later work easier I learned the Scrapy framework.
For an introduction to the framework, see http://www.lxweimin.com/p/a8aad3bf4dc4
as well as the Scrapy documentation: http://scrapy-chs.readthedocs.io/zh_CN/0.24/intro/tutorial.html
First, download and install Scrapy:
sudo pip install scrapy
If you hit the error:
'module' object has no attribute 'OP_NO_TLSv1_1'
which usually stems from mismatched Twisted/pyOpenSSL versions, you can run the following commands:
sudo pip install --upgrade scrapy
sudo pip install --upgrade twisted
sudo pip install --upgrade pyopenssl
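To confirm the upgrades took effect, scrapy version -v prints the versions of Scrapy and its key dependencies (Twisted, pyOpenSSL, etc.); the exact output depends on your environment:
scrapy version -v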
Take a look at Scrapy's commands:
Scrapy 1.4.0 - no active project
Usage:
scrapy <command> [options] [args]
Available commands:
bench Run quick benchmark test
fetch Fetch a URL using the Scrapy downloader
genspider Generate new spider using pre-defined templates
runspider Run a self-contained spider (without creating a project)
settings Get settings values
shell Interactive scraping console
startproject Create new project
version Print Scrapy version
view Open URL in browser, as seen by Scrapy
[ more ] More commands available when run from project directory
Use "scrapy<command></command>-h" to see more info about a command
Create a new project:
scrapy startproject urlspider
New Scrapy project 'urlspider', using template directory '/usr/local/lib/python2.7/dist-packages/scrapy/templates/project', created in:
/home/qiqi/spider/urlspider
You can start your first spider with:
cd urlspider
scrapy genspider example example.com
Below is the project directory:
ll
total 28
drwxr-sr-x 3 qiqi qiqi 4096 Nov 14 07:10 ./
drwxr-sr-x 3 qiqi qiqi 4096 Nov 14 07:10 ../
-rw-r--r-- 1 qiqi qiqi 0 Nov 14 06:58 __init__.py
-rw-rw-r-- 1 qiqi qiqi 288 Nov 14 07:10 items.py
-rw-rw-r-- 1 qiqi qiqi 1907 Nov 14 07:10 middlewares.py
-rw-rw-r-- 1 qiqi qiqi 289 Nov 14 07:10 pipelines.py
-rw-rw-r-- 1 qiqi qiqi 3158 Nov 14 07:10 settings.py
drwxr-sr-x 2 qiqi qiqi 4096 Nov 14 06:58 spiders/
items.py defines the format in which scraped data is stored
settings.py is the project configuration file
spiders/ holds the spider code; all of our spiders go in there
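As an aside, since settings.py will matter later for the anti-crawler work, here is a minimal sketch of options commonly adjusted when crawling; the values below are illustrative assumptions, not from the original project:

# settings.py (excerpt) -- illustrative values, adjust to your needs
BOT_NAME = 'urlspider'

# Scrapy can obey or ignore robots.txt
ROBOTSTXT_OBEY = False

# slow down requests to reduce load on the target site
DOWNLOAD_DELAY = 0.5

# the User-Agent header sent with each request
USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64)'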
Go into the spiders folder and create a spider file:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import Selector


class UrlSpider(scrapy.Spider):
    name = 'url'
    allowed_domains = ['opencv.org']
    start_urls = ['http://opencv.org/']

    def parse(self, response):
        se = Selector(response)
        site = se.xpath('//a/@href').extract()
        print site
Start the spider:
scrapy crawl url
This retrieves the links from a single page. It uses Scrapy's own parsing tool, selectors, which "select" parts of an HTML document via XPath or CSS expressions. XPath is a language for selecting nodes in XML documents that also works on HTML; CSS is a language for styling HTML documents, and its selectors tie styles to particular HTML elements. Scrapy's selectors are built on top of the lxml library.
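As a standalone illustration of the two expression styles (the HTML snippet here is made up for the demo), the same attribute can be extracted with either XPath or CSS:

# -*- coding: utf-8 -*-
from scrapy.selector import Selector

html = '<html><body><a href="/about" class="nav">About</a></body></html>'
sel = Selector(text=html)

# XPath and CSS expressions selecting the same href attribute
print sel.xpath('//a/@href').extract()        # [u'/about']
print sel.css('a.nav::attr(href)').extract()  # [u'/about']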
The version above can run into two tags pointing at the same link, or pick up relative URLs. Add deduplication and resolve relative URLs to absolute ones:
# -*- coding: utf-8 -*-
import scrapy
import urlparse
from scrapy.selector import Selector


class UrlSpider(scrapy.Spider):
    name = 'url'
    allowed_domains = ['opencv.org']
    start_urls = ['http://opencv.org/']
    result_urls = []

    def parse(self, response):
        se = Selector(response)
        result = set()
        site = se.xpath('//a/@href').extract()
        for s in site:
            # resolve relative hrefs against the page URL; the set dedupes
            tmpurl = urlparse.urljoin(response.url, s)
            if tmpurl not in result:
                result.add(tmpurl)
        for r in result:
            print r
Now we get the complete set of URLs on the page.
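For reference, urlparse.urljoin resolves an href against the URL of the page it appeared on (the paths below are made-up examples):

import urlparse

print urlparse.urljoin('http://opencv.org/about.html', 'contact.html')
# -> http://opencv.org/contact.html
print urlparse.urljoin('http://opencv.org/about.html', '/news/')
# -> http://opencv.org/news/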
Next, the pages need to be crawled recursively:
# -*- coding: utf-8 -*-
import scrapy
import urlparse
from scrapy.selector import Selector


class UrlSpider(scrapy.Spider):
    name = 'url'
    allowed_domains = ['opencv.org']
    start_urls = ['http://opencv.org/']
    result_urls = []

    def parse(self, response):
        print response.url
        se = Selector(response)
        result = set()
        site = se.xpath('//a/@href').extract()
        for s in site:
            tmpurl = urlparse.urljoin(response.url, s)
            if tmpurl not in result:
                result.add(tmpurl)
        for r in result:
            # follow only URLs that have not been visited yet
            if r not in UrlSpider.result_urls:
                UrlSpider.result_urls.append(r)
                yield scrapy.Request(url=r, callback=self.parse)
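Note that allowed_domains already keeps the recursion on-site (Scrapy's offsite middleware drops requests to other domains). On a large site the crawl can still run for a long time; Scrapy's built-in DEPTH_LIMIT setting can cap the recursion depth, for example:

scrapy crawl url -s DEPTH_LIMIT=2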
Now store the data in an item. Edit items.py:
import scrapy


class UrlspiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    url = scrapy.Field()
Modify the spider to store the data into the item:
# -*- coding: utf-8 -*-
import scrapy
import urlparse
from urlspider.items import UrlspiderItem
from scrapy.selector import Selector


class UrlSpider(scrapy.Spider):
    name = 'url'
    allowed_domains = ['opencv.org']
    start_urls = ['http://opencv.org/']
    result_urls = []

    def parse(self, response):
        item = UrlspiderItem()
        item['url'] = response.url
        se = Selector(response)
        result = set()
        site = se.xpath('//a/@href').extract()
        for s in site:
            tmpurl = urlparse.urljoin(response.url, s)
            if tmpurl not in result:
                result.add(tmpurl)
        for r in result:
            if r not in UrlSpider.result_urls:
                UrlSpider.result_urls.append(r)
                yield scrapy.Request(url=r, callback=self.parse)
        yield item
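Since the spider now yields items, Scrapy's built-in feed export can write the results to a file with no extra code, for example:

scrapy crawl url -o urls.json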
That basically completes link extraction for static pages. Anti-crawler techniques will be covered in the next post.