WebMagic Study Notes (6): Custom Pipelines (a Simple Crawler)

The Pipeline interface

public interface Pipeline {

    /**
     * Process extracted results.
     * ResultItems holds the extraction results. It is backed by a Map:
     * data saved via page.putField(key, value) can be read back with resultItems.get(key).
     * @param resultItems resultItems
     * @param task task
     */
    public void process(ResultItems resultItems, Task task);
}
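As the comment above says, ResultItems is essentially a keyed map. A minimal self-contained sketch of that putField/get contract (ResultItemsSketch is a hypothetical stand-in, not WebMagic's actual class):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ResultItemsSketch {
    // a thin wrapper over a Map, mirroring how ResultItems stores fields
    private final Map<String, Object> fields = new LinkedHashMap<>();

    public ResultItemsSketch put(String key, Object value) {
        fields.put(key, value);
        return this;
    }

    @SuppressWarnings("unchecked")
    public <T> T get(String key) {
        return (T) fields.get(key);
    }

    public Map<String, Object> getAll() {
        return fields;
    }

    public static void main(String[] args) {
        ResultItemsSketch items = new ResultItemsSketch()
                .put("title", "Hello").put("content", "World");
        String title = items.get("title");
        System.out.println(title);                 // prints Hello
        System.out.println(items.getAll().size()); // prints 2
    }
}
```

Whatever the processor puts in with page.putField ends up in this map, which is exactly what a Pipeline's process method receives.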
  • Printing results to the console
    ConsolePipeline
public class ConsolePipeline implements Pipeline {

    @Override
    public void process(ResultItems resultItems, Task task) {
        System.out.println("get page: " + resultItems.getRequest().getUrl());
        for (Map.Entry<String, Object> entry : resultItems.getAll().entrySet()) {
            System.out.println(entry.getKey() + ":\t" + entry.getValue());
        }
    }
}
  • Saving results to MySQL: a simple crawler
    Define a custom pipeline that implements the Pipeline interface. In its process method, write the extracted data to the database.
package com.sima.crawler;
import com.sima.db.MysqlDBUtils;
import us.codecraft.webmagic.ResultItems;
import us.codecraft.webmagic.Task;
import us.codecraft.webmagic.pipeline.Pipeline;
/**
 * Created by cfq on 2017/4/30.
 */
public class GankDaoPipeline implements Pipeline {
    @Override
    public void process(ResultItems resultItems, Task task) {
        System.out.println("process");
        GankModel gankModel = new GankModel(resultItems.get("title").toString(), resultItems.get("content").toString());
        // persist to the database
        System.out.println("Inserted " + MysqlDBUtils.insert(gankModel) + " row(s)!");
    }
}

The database helper class is shown below; Druid is used for connection management.

import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.pool.DruidDataSourceFactory;

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Properties;

public class MysqlDBUtils {
    private static Connection getConn() {
        String confile = "druid.properties"; // configuration file name
        Properties properties = new Properties();
        Connection connection = null;
        confile = MysqlDBUtils.class.getResource("/").getPath() + confile; // resolve the file path
        File file = new File(confile);
        try (InputStream inputStream = new BufferedInputStream(new FileInputStream(file))) {
            properties.load(inputStream); // load the configuration

            // DruidDataSourceFactory builds a javax.sql.DataSource from the properties
            DruidDataSource dataSource = (DruidDataSource) DruidDataSourceFactory.createDataSource(properties);
            connection = dataSource.getConnection();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return connection;
    }

    public static int insert(GankModel gankModel) {
        Connection conn = getConn();
        int i = 0;
        String sql = "insert into gankinfo (title,content) values(?,?)";
        try {
            // prepareStatement already returns PreparedStatement; no cast needed
            PreparedStatement pstmt = conn.prepareStatement(sql);
            pstmt.setString(1, gankModel.getTitle());
            pstmt.setString(2, gankModel.getContent());
            i = pstmt.executeUpdate();
            pstmt.close();
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return i;
    }
}
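One caveat with getConn() above: it builds a brand-new Druid pool on every call, which defeats the purpose of connection pooling. A common fix is to create the DataSource once and hand out connections from it. A minimal sketch of that lazy-singleton pattern, using a hypothetical FakeDataSource stand-in so it compiles without Druid on the classpath (in the real code the field would be a DruidDataSource built from druid.properties):

```java
public class PoolHolder {
    // hypothetical stand-in for DruidDataSource; counts how many pools get built
    static class FakeDataSource {
        static int created = 0;
        FakeDataSource() { created++; }
    }

    private static volatile FakeDataSource dataSource;

    // double-checked locking: the pool is built at most once per JVM
    static FakeDataSource getDataSource() {
        if (dataSource == null) {
            synchronized (PoolHolder.class) {
                if (dataSource == null) {
                    dataSource = new FakeDataSource(); // real code: load druid.properties here
                }
            }
        }
        return dataSource;
    }

    public static void main(String[] args) {
        getDataSource();
        getDataSource();
        System.out.println(FakeDataSource.created); // prints 1
    }
}
```

With this shape, getConn() would become dataSource.getConnection() against the single shared pool.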

The Druid configuration is as follows (see also the Druid study notes):

#basic properties: url, user, password
url=jdbc:mysql://localhost:3306/istep?useUnicode=true&characterEncoding=utf-8
username=istep
password=istep
#initial, minimum and maximum pool size
initialSize=1
minIdle=1
maxActive=20
#max wait time (ms) when acquiring a connection
maxWait=60000
#interval (ms) between runs of the evictor that closes idle connections
timeBetweenEvictionRunsMillis=60000
#minimum time (ms) a connection stays idle in the pool before it may be evicted
minEvictableIdleTimeMillis=300000
validationQuery=SELECT 'x'
testWhileIdle=true
testOnBorrow=false
testOnReturn=false
#filters=config
#connectionProperties=config.decrypt=true;config.decrypt.key=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAIZcLMcxhrqm+TE10+o2KKI1eoVw1UdtRtBSpKggXkj460nBhO27QdahWZq0MlkwKEKYLyb79TZFdPov8V3pbdsCAwEAAQ==
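The file above is plain java.util.Properties key=value format; DruidDataSourceFactory.createDataSource(properties) then reads keys such as url, username, and maxActive from it. A minimal sketch of just the parsing step (no Druid needed):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class DruidPropsSketch {
    // parse the same key=value text that druid.properties uses
    static Properties parse(String text) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(text));
        return props;
    }

    public static void main(String[] args) throws IOException {
        Properties props = parse("url=jdbc:mysql://localhost:3306/istep\nmaxActive=20\n");
        System.out.println(props.getProperty("maxActive")); // prints 20
    }
}
```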

The model class:

public class GankModel {

    int id;
    String title;
    String content;

    public GankModel() {
    }

    public GankModel(String title, String content) {
        this.title = title;
        this.content = content;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public String getContent() {
        return content;
    }

    public void setContent(String content) {
        this.content = content;
    }

    @Override
    public String toString() {
        return "GankModel{" +
                "title='" + title + '\'' +
                ", content='" + content + '\'' +
                '}';
    }
}

Crawler demo: register the custom GankDaoPipeline on the Spider via addPipeline(new GankDaoPipeline()). The main flow:
public class GankRepoPageProcessor implements PageProcessor {
    // site-wide crawl configuration: encoding, crawl interval, retry count, etc.
    private Site site = Site.me().setRetryTimes(3).setSleepTime(2000);

    // process() is the core hook for custom crawler logic; write the extraction code here
    public void process(Page page) {
        // define how page information is extracted
        // crawl the Gank.io archive pages, e.g. http://gank.io/2017/04/26
        page.addTargetRequests(page.getHtml().links().regex("(http://gank\\.io/\\d+/\\d+/\\d+)").all());
        page.putField("title", page.getHtml().$("h1").toString()); // extract the title
        page.putField("content", page.getHtml().$("div.outlink").toString()); // extract the page content
        if (page.getResultItems().get("title") == null) {
            // skip pages that have no data
            page.setSkip(true);
        }
    }
    }

    public Site getSite() {
        return site;
    }

    public static void main(String[] args) {
        Spider.create(new GankRepoPageProcessor())
                .addUrl("http://gank.io") // start from this url
                .addPipeline(new GankDaoPipeline())
                .thread(5)
                .run();
    }
}
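The regex passed to links().regex(...) keeps only archive URLs of the form http://gank.io/yyyy/mm/dd, which is what routes the crawler through the history pages. The pattern can be checked in isolation with java.util.regex:

```java
import java.util.regex.Pattern;

public class LinkFilterSketch {
    // the same pattern the processor passes to links().regex(...)
    static final Pattern ARCHIVE = Pattern.compile("http://gank\\.io/\\d+/\\d+/\\d+");

    static boolean isArchiveUrl(String url) {
        return ARCHIVE.matcher(url).matches();
    }

    public static void main(String[] args) {
        System.out.println(isArchiveUrl("http://gank.io/2017/04/26")); // prints true
        System.out.println(isArchiveUrl("http://gank.io/about"));      // prints false
    }
}
```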