Flume Deployment and Usage

Flume is a distributed, reliable, and highly available service for efficiently collecting, aggregating, and moving large amounts of log data. Its design goals are reliability, scalability, and manageability. With Flume we can conveniently collect logs from a source (a web server, etc.) and deliver them to a destination (such as HDFS or Kafka).

  • Frameworks similar to Flume include:

Flume: Cloudera/Apache, Java
Scribe: Facebook, C/C++, no longer maintained
Chukwa: Yahoo/Apache, Java, no longer maintained
Kafka: Apache; it does not quite belong in this list, since it is primarily a data buffer
Fluentd: Ruby
Logstash: part of the ELK stack (Elasticsearch, Logstash, Kibana)
The two most worth paying attention to are Flume and Logstash, both of which are widely used in industry.

  • Architecture and core components

Flume's unit of work is the Agent. Each Agent consists of three core components: a Source (collects data from the origin), a Channel (buffers the collected data), and a Sink (writes the data out).


[Figure: Flume agent architecture: Source → Channel → Sink]
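These components are wired together in a Java-properties-style configuration file. A minimal sketch of the naming scheme (the agent and component names agent1, src1, ch1, and sk1 are placeholders, not from the original article):

# declare the components that belong to the agent named "agent1"
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sk1

# each component is configured as <agent>.<kind>.<name>.<property>
agent1.sources.src1.type = ...
agent1.channels.ch1.type = ...
agent1.sinks.sk1.type = ...

# wiring: a source can write to several channels, a sink reads from exactly one
agent1.sources.src1.channels = ch1
agent1.sinks.sk1.channel = ch1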
  • Installing Flume (version 1.6.0)
  1. Prerequisites
    Java Runtime Environment - Java 1.7 or later
    Memory - sufficient memory for the sources, channels, and sinks you configure
    Disk Space - sufficient disk space for the channels and sinks you configure
    Directory Permissions - read/write permissions on the directories used by the agent
  2. Install the JDK
    Download the JDK
    Extract it to ~/app
    Add Java to the system environment variables in ~/.bash_profile:
    export JAVA_HOME=/home/hadoop/app/jdk1.8.0_144
    export PATH=$JAVA_HOME/bin:$PATH
    Run source ~/.bash_profile so the configuration takes effect
    Verify: java -version
  3. Install Flume
    Download Flume
    Extract it to ~/app
    Add Flume to the system environment variables in ~/.bash_profile:
    export FLUME_HOME=/home/hadoop/app/apache-flume-1.6.0-cdh5.7.0-bin
    export PATH=$FLUME_HOME/bin:$PATH
    Run source ~/.bash_profile so the configuration takes effect
    Configure flume-env.sh: export JAVA_HOME=/home/hadoop/app/jdk1.8.0_144
    Verify: flume-ng version
    (Steps 2 and 3 are consolidated into a shell sketch after this list.)
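A shell sketch that puts installation steps 2 and 3 together (the tarball file names are assumptions based on the versions above; flume-env.sh is created from the template shipped in $FLUME_HOME/conf):

mkdir -p ~/app

# --- JDK (tarball name is an assumption) ---
tar -zxvf jdk-8u144-linux-x64.tar.gz -C ~/app
echo 'export JAVA_HOME=/home/hadoop/app/jdk1.8.0_144' >> ~/.bash_profile
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bash_profile

# --- Flume (tarball name is an assumption) ---
tar -zxvf apache-flume-1.6.0-cdh5.7.0-bin.tar.gz -C ~/app
echo 'export FLUME_HOME=/home/hadoop/app/apache-flume-1.6.0-cdh5.7.0-bin' >> ~/.bash_profile
echo 'export PATH=$FLUME_HOME/bin:$PATH' >> ~/.bash_profile
source ~/.bash_profile

# point Flume at the JDK through flume-env.sh
cp $FLUME_HOME/conf/flume-env.sh.template $FLUME_HOME/conf/flume-env.sh
echo 'export JAVA_HOME=/home/hadoop/app/jdk1.8.0_144' >> $FLUME_HOME/conf/flume-env.sh

# verify both installations
java -version
flume-ng version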
  • Flume example 1 (netcat source + memory channel + logger sink)
  1. The key to using Flume is writing the configuration file: configure the Source, Channel, and Sink separately, then wire the three together
    For example, create a configuration file $FLUME_HOME/conf/example.conf that uses a netcat source, memory channel, and logger sink. The contents of example.conf are as follows:
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop000
a1.sources.r1.port = 44444

a1.sinks.k1.type = logger

a1.channels.c1.type = memory

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
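Optionally, the memory channel's buffering can be bounded explicitly in example.conf; a sketch of two commonly used memory channel properties (the values are illustrative, not from the original setup):

# maximum number of events held in the channel
a1.channels.c1.capacity = 1000
# maximum number of events taken per transaction by a source or sink
a1.channels.c1.transactionCapacity = 100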
  2. Start the Agent (--name must match the agent name used in the configuration file, a1 here; -Dflume.root.logger=INFO,console makes the logger sink's output appear on the console):
flume-ng agent \
--name a1  \
--conf $FLUME_HOME/conf  \
--conf-file $FLUME_HOME/conf/example.conf \
-Dflume.root.logger=INFO,console
  3. Start telnet and send data to verify
    After starting telnet hadoop000 44444 (a sketch of the session follows this list) and typing 123, you will see data like the following in the Flume console:
    Event: { headers:{} body: 31 32 33 0D 123. }
    An Event is the basic unit of data transfer in Flume:
    Event = optional headers + byte array
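A sketch of the verifying telnet session (the Trying/Connected lines are printed by telnet itself; the OK reply is the netcat source's default acknowledgement, and the exact output may differ):

$ telnet hadoop000 44444
Trying ...
Connected to hadoop000.
Escape character is '^]'.
123
OK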
  • Flume example 2 (exec source + memory channel + logger sink)
  1. Create the exec-memory-logger.conf configuration file
    Its contents are as follows:
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1
    
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /home/hadoop/data/data.log
    a1.sources.r1.shell = /bin/sh -c
    
    a1.sinks.k1.type = logger
    
    a1.channels.c1.type = memory
    
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
    
  2. Start the Agent
flume-ng agent \
--name a1  \
--conf $FLUME_HOME/conf  \
--conf-file $FLUME_HOME/conf/exec-memory-logger.conf \
-Dflume.root.logger=INFO,console
  3. Append data to the /home/hadoop/data/data.log log file to verify (see the sketch below)
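A minimal way to generate test data (the file path is the one referenced by the exec source above):

# make sure the directory exists, then append a test line
mkdir -p /home/hadoop/data
echo "hello flume" >> /home/hadoop/data/data.log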
  • Flume example 3 (chaining two Agents together)

Consider the following situation:


[Figure: chaining two agents (agent-to-agent)]

If a web server produces logs on one machine, you can run an Agent on that machine whose Sink sends the data to the Source of an Agent on another machine; that second Agent then uses a logger sink to print to the console. Of course, printing the logs to the console is not useful by itself; in a real pipeline the data would go to HDFS or be fed into Kafka for further processing. This is only an example.
First Agent (exec source + memory channel + avro sink)
Second Agent (avro source + memory channel + logger sink)


  1. Create the exec-memory-avro.conf and avro-memory-logger.conf configuration files
    Since I do not have two machines at hand, I simulate the two-machine setup on a single machine (hadoop000).
    exec-memory-avro.conf:
    exec-memory-avro.sources = exec-source
    exec-memory-avro.sinks = avro-sink
    exec-memory-avro.channels = memory-channel
    
    exec-memory-avro.sources.exec-source.type = exec
    exec-memory-avro.sources.exec-source.command = tail -F /home/hadoop/data/data.log
    exec-memory-avro.sources.exec-source.shell = /bin/sh -c
    
    exec-memory-avro.sinks.avro-sink.type = avro
    exec-memory-avro.sinks.avro-sink.hostname = hadoop000
    exec-memory-avro.sinks.avro-sink.port = 44444
    
    exec-memory-avro.channels.memory-channel.type = memory
    
    exec-memory-avro.sources.exec-source.channels = memory-channel
    exec-memory-avro.sinks.avro-sink.channel = memory-channel
    
    avro-memory-logger.conf:
    avro-memory-logger.sources = avro-source
    avro-memory-logger.sinks = logger-sink
    avro-memory-logger.channels = memory-channel
    
    avro-memory-logger.sources.avro-source.type = avro
    avro-memory-logger.sources.avro-source.bind = hadoop000
    avro-memory-logger.sources.avro-source.port = 44444
    
    avro-memory-logger.sinks.logger-sink.type = logger
    
    avro-memory-logger.channels.memory-channel.type = memory
    
    avro-memory-logger.sources.avro-source.channels = memory-channel
    avro-memory-logger.sinks.logger-sink.channel = memory-channel
    
  2. Start the Agents
    Start avro-memory-logger first, so that its avro source is already listening before the avro sink of the other agent tries to connect:
    flume-ng agent \
    --name avro-memory-logger  \
    --conf $FLUME_HOME/conf  \
    --conf-file $FLUME_HOME/conf/avro-memory-logger.conf \
    -Dflume.root.logger=INFO,console
    
    Then start exec-memory-avro:
    flume-ng agent \
    --name exec-memory-avro  \
    --conf $FLUME_HOME/conf  \
    --conf-file $FLUME_HOME/conf/exec-memory-avro.conf \
    -Dflume.root.logger=INFO,console
    
  3. Append data to the /home/hadoop/data/data.log log file to verify (see the sketch below)
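To watch events flow through both agents, keep appending lines to the monitored file; a minimal sketch (the path comes from the exec source configuration above):

# append a timestamped line every second; the events should appear
# in the console of the avro-memory-logger agent
while true; do
  echo "test $(date +%s)" >> /home/hadoop/data/data.log
  sleep 1
done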