https://github.com/nietaki/crawlie
Crawlie implements a parallel crawler using Elixir's GenStage. Most of the heavy lifting is done by Crawlie.Stage.UrlManager, which consumes the user-provided list of urls, receives the urls discovered during crawling, makes sure every url is processed only once, and keeps the set of discovered urls as small as possible by traversing the url tree depth-first.
The urls obtained from Crawlie.Stage.UrlManager are fed into a GenStage Flow, which fetches them in parallel using HTTPoison and processes the responses with user-provided callback functions. Newly discovered urls are sent back to the UrlManager.
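The user-provided callbacks mentioned above live in a module implementing the Crawlie.ParserLogic behaviour. Below is a minimal sketch; the callback names and signatures (parse/3, extract_uris/3, extract_data/3) are assumptions based on the Crawlie 0.6 README, so check the docs of the version you actually install.

```elixir
defmodule MyCrawler.Logic do
  # Assumed behaviour and callback names from the Crawlie 0.6 era;
  # verify against your installed version's documentation.
  @behaviour Crawlie.ParserLogic

  # Turn the raw response body into whatever structure the later
  # callbacks need (here: just the body string itself).
  def parse(_url, body, _options), do: {:ok, body}

  # Return the urls discovered on the page; Crawlie sends these back
  # to the UrlManager so they can be crawled in turn.
  def extract_uris(_url, parsed, _options) do
    Regex.scan(~r{https?://[^\s"'<>]+}, parsed) |> List.flatten()
  end

  # Return the data items this page contributes to the result Flow.
  def extract_data(_url, parsed, _options), do: [parsed]
end
```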
Here is its flow diagram:
If you are interested in crawling statistics, take a look at Crawlie.crawl_and_track_stats/3. It starts a Stats GenServer in Crawlie's supervision tree, which collects data about the crawling process as it runs.
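A rough sketch of how this variant might be invoked. Hedged throughout: MyCrawler.Logic and the max_depth option are illustrative assumptions, and the exact return shape of crawl_and_track_stats/3 is version-dependent, so consult the docs of the version you install.

```elixir
urls = ["https://en.wikipedia.org/wiki/Elixir_(programming_language)"]

# In addition to the result flow, this variant exposes a handle to the
# Stats GenServer running in Crawlie's supervision tree, which can be
# queried while the crawl is still in progress.
result = Crawlie.crawl_and_track_stats(urls, MyCrawler.Logic, max_depth: 2)
```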
The package is used as follows:
Add crawlie to your list of dependencies in mix.exs:
def deps do
  [{:crawlie, "~> 0.6.0"}]
end
Ensure crawlie is started before your application:
def application do
  [applications: [:crawlie]]
end
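With the dependency in place, a crawl is started with Crawlie.crawl/3. The sketch below is illustrative, not Crawlie's documented API surface: MyCrawler.Logic is a hypothetical module implementing the parser-logic callbacks, and the option names are assumptions to be checked against the installed version's docs.

```elixir
urls = ["https://en.wikipedia.org/wiki/Elixir_(programming_language)"]

# Crawlie.crawl/3 returns a Flow of the data items extracted by the
# user-provided logic module; option names here are assumptions.
flow = Crawlie.crawl(urls, MyCrawler.Logic, max_depth: 2)

# The Flow is lazy; forcing it (e.g. with Enum.to_list/1) runs the crawl.
results = flow |> Enum.to_list()
```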
Take a look at the example project:
https://github.com/nietaki/crawlie_example
$ mix deps.get
$ mix crawlie.example
most popular words longer than 5 letters in the vicinity of ["https://en.wikipedia.org/wiki/Elixir_(programming_language)", "https://en.wikipedia.org/wiki/Mainframe_computer"]:
{word, count}
=============
{"system", 1973}
{"computer", 1618}
{"systems", 1257}
{"programming", 1165}
{"language", 1147}
{"software", 1052}
{"operating", 1022}
{"computers", 887}
{"languages", 873}
{"program", 825}
{"memory", 814}
{"number", 798}
{"called", 767}
{"between", 724}
{"company", 693}
{"support", 678}
{"different", 649}
{"including", 623}
{"however,", 620}
{"control", 590}
That is the output of a sample run.
In the source, the two files worth reading are lib/mix/tasks/crawlie/example.ex and lib/crawlie_example/word_count_logic.ex.
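The word-counting step itself is independent of Crawlie and easy to sketch in plain Elixir. The module below is a hypothetical stand-in for what word_count_logic.ex does, not its actual code: it tallies words longer than a minimum length, exactly the shape of the {word, count} output shown above.

```elixir
defmodule WordCount do
  # Count how often each word longer than `min_length` letters appears,
  # most frequent first — mirroring the {word, count} table above.
  def popular_words(text, min_length) do
    text
    |> String.downcase()
    |> String.split(~r/\s+/, trim: true)
    |> Enum.filter(&(String.length(&1) > min_length))
    |> Enum.frequencies()
    |> Enum.sort_by(fn {_word, count} -> -count end)
  end
end

WordCount.popular_words("system system system computer computer memory", 5)
# => [{"system", 3}, {"computer", 2}, {"memory", 1}]
```

Note that Enum.frequencies/1 requires Elixir 1.10 or later; on older releases the same tally can be built with Enum.reduce/3 over a map.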