Scraping Douban Movies with Python and Scrapy

  • 1 Create the project: scrapy startproject douban


    (screenshot: output of the scrapy startproject command)
    • 1.1 Enter the project directory; inside it you will see the following files (the generated layout is sketched below)


      (screenshot: the generated project files)
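      For reference, this is the standard layout scrapy startproject generates (shown here for a project named douban):

      douban/
          scrapy.cfg          # deploy configuration file
          douban/             # the project's Python module
              __init__.py
              items.py        # item definitions
              middlewares.py  # spider / downloader middlewares
              pipelines.py    # item pipelines
              settings.py     # project settings
              spiders/        # directory your spiders go in
                  __init__.py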
  • 2 Open the project in PyCharm and define the fields you want to scrape in items.py. To keep things simple, I only scrape the title (a minimal items.py is shown below).
    (screenshot: items.py)
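    A minimal items.py matching the single field the spider below fills in:

import scrapy

class DoubanItem(scrapy.Item):
    # the movie title extracted from each entry on the list page
    title = scrapy.Field()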
  • 3 Go into the spiders directory from step 1.1 and create your own spider class

    • 3.1 You can create the file manually in that directory


      (screenshot: the spider file created by hand)
    • 3.2 Or open a terminal in the project directory and generate a template from the command line; either command works (1 creates a basic Spider, 2 creates a CrawlSpider template, which is what the code below extends):
      1: scrapy genspider moviedouban movie.douban.com
      2: scrapy genspider -t crawl moviedouban movie.douban.com

    • 3.3 Open the moviedouban file and write your spider in it:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from douban.items import DoubanItem

class MoviedoubanSpider(CrawlSpider):
    name = 'doubanmovie'
    allowed_domains = ['movie.douban.com']
    start_urls = ['https://movie.douban.com/top250?start=0']

    # the Top 250 list is paginated as ?start=0, ?start=25, ..., ?start=225,
    # so this pattern matches every pagination link
    pagelink = LinkExtractor(allow=r'start=\d+')

    rules = (
        # scrape every page the extractor finds and keep following links
        Rule(pagelink, callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        movies = response.xpath("//div[@class='info']")

        for each in movies:
            # create a fresh item per movie; reusing one item object would
            # yield the same (repeatedly overwritten) instance every time
            item = DoubanItem()
            # the first <span class="title"> holds the movie title
            item['title'] = each.xpath(".//span[@class='title'][1]/text()").extract()[0]
            yield item
  • 4 Now write the pipeline file (pipelines.py):
import json
import codecs

class DoubanPipeline(object):

    def __init__(self):
        # open the output file once when the spider starts
        self.file = codecs.open("douban.json", "w", encoding="utf-8")

    def process_item(self, item, spider):
        # write each item as one JSON object per line, comma-separated
        text = json.dumps(dict(item), ensure_ascii=False) + ",\n"
        self.file.write(text)
        return item

    def close_spider(self, spider):
        self.file.close()
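Note that writing each item as a dict followed by ",\n" produces comma-separated JSON objects rather than a strictly valid JSON document. If well-formed output is all you need, Scrapy's built-in feed export can replace this pipeline entirely, e.g. scrapy crawl doubanmovie -o movies.json.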
  • 5 Then edit settings.py:

# slow down requests so we don't hammer the site
DOWNLOAD_DELAY = 2.5
# disable cookies
COOKIES_ENABLED = False
# MongoDB host
MONGODB_HOST = "127.0.0.1"
# MongoDB port
MONGODB_PORT = 27017

# RandomUserAgent and RandomProxy implement process_request, which makes them
# downloader middlewares, so they are registered here rather than in
# SPIDER_MIDDLEWARES (registered there, process_request would never be called)
DOWNLOADER_MIDDLEWARES = {
   'douban.middlewares.RandomUserAgent': 100,
   'douban.middlewares.RandomProxy': 200,
}
ITEM_PIPELINES = {
   'douban.pipelines.DoubanPipeline': 300,  # the pipeline class defined in step 4
}
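The MONGODB_HOST / MONGODB_PORT settings above are not used by the JSON pipeline from step 4. If you would rather store items in MongoDB, here is a minimal sketch of a pipeline they could feed (the pymongo dependency and the douban / movies database and collection names are my assumptions, not part of the original project):

import pymongo
from douban.settings import MONGODB_HOST, MONGODB_PORT

class MongoDBPipeline(object):
    def __init__(self):
        # connect once when the pipeline is created; the database and
        # collection names below are illustrative placeholders
        client = pymongo.MongoClient(MONGODB_HOST, MONGODB_PORT)
        self.collection = client["douban"]["movies"]

    def process_item(self, item, spider):
        # one MongoDB document per scraped item
        self.collection.insert_one(dict(item))
        return item

To use it, add it to ITEM_PIPELINES just like the JSON pipeline.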

These are the USER_AGENTS I added, so that requests appear to come from many different clients:

USER_AGENTS = [
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)',
    'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2)',
    'Opera/9.27 (Windows NT 5.2; U; zh-cn)',
    'Opera/8.0 (Macintosh; PPC Mac OS X; U; en)',
    'Mozilla/5.0 (Macintosh; PPC Mac OS X; U; en) Opera 8.0',
    'Mozilla/5.0 (Linux; U; Android 4.0.3; zh-cn; M032 Build/IML74K) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30',
    'Mozilla/5.0 (Windows; U; Windows NT 5.2) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13'
]
PROXIES = [
        {"ip_port": "121.42.140.113:16816", "user_passwd": "mr_mao_hacker:sffqry9r"},
        #{"ip_port": "121.42.140.113:16816", "user_passwd": ""},
        #{"ip_port": "121.42.140.113:16816", "user_passwd": ""},
        #{"ip_port": "121.42.140.113:16816", "user_passwd": ""},
]
  • 6 Implement the middleware classes in middlewares.py; they attach a random USER_AGENTS entry from settings.py to each outgoing request's headers, and pick a random proxy from PROXIES:
from scrapy import signals

import random
import base64

from douban.settings import USER_AGENTS
from douban.settings import PROXIES

# attach a random User-Agent header to every request
class RandomUserAgent(object):
    def process_request(self, request, spider):
        useragent = random.choice(USER_AGENTS)
        request.headers.setdefault("User-Agent", useragent)

# route every request through a randomly chosen proxy
class RandomProxy(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)

        if not proxy['user_passwd']:
            # proxy without credentials: only the address is needed
            # (a falsiness check also covers the "" entries in PROXIES,
            # which an `is None` test would miss)
            request.meta['proxy'] = "http://" + proxy['ip_port']
        else:
            # base64-encode the credentials; b64encode works on bytes in
            # Python 3, hence the encode()/decode() round trip
            base64_userpasswd = base64.b64encode(
                proxy['user_passwd'].encode()).decode()
            # HTTP Basic auth header in the format proxy servers expect
            request.headers['Proxy-Authorization'] = 'Basic ' + base64_userpasswd
            request.meta['proxy'] = "http://" + proxy['ip_port']
class DoubanSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # ... (the rest is the unmodified boilerplate Scrapy generated)
  • 7 scrapy list shows the spider names in the current project (here it prints doubanmovie)

  • 8 scrapy crawl <spider-name> starts the crawl; in this project that is scrapy crawl doubanmovie

  • 9 The resulting JSON data:
    {"title": "肖申克的救贖"},
    {"title": "霸王別姬"},
    {"title": "這個殺手不太冷"},
    {"title": "阿甘正傳"},
    {"title": "美麗人生"},
    {"title": "千與千尋"},
    {"title": "辛德勒的名單"},
    {"title": "泰坦尼克號"},
    {"title": "盜夢空間"},
    {"title": "機器人總動員"},
    {"title": "海上鋼琴師"},
    {"title": "三傻大鬧寶萊塢"},
    {"title": "忠犬八公的故事"},
    {"title": "放牛班的春天"},
    {"title": "大話西游之大圣娶親"},
    {"title": "教父"},
    {"title": "龍貓"},
    ......
