Official site: https://www.elastic.co/cn/
Official definitive guide: https://www.elastic.co/guide/cn/elasticsearch/guide/current/index.html
Installation guide: https://www.elastic.co/guide/en/elasticsearch/reference/5.x/rpm.html
ELK is short for Elasticsearch, Logstash, and Kibana. These three are the core components, though not the whole stack.
Elasticsearch is a real-time full-text search and analytics engine that provides three major functions: collecting, analyzing, and storing data. It is a scalable distributed system exposing REST and Java APIs for efficient search, built on top of the Apache Lucene search engine library.
Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, message queues (such as RabbitMQ), and JMX, and it can output data in multiple ways, including email, websockets, and Elasticsearch.
I. Java environment: install JDK 1.8 or later
[root@elk-node1 ~]# yum install -y java-1.8.0
[root@elk-node1 ~]# java -version
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
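Elasticsearch will not start on Java older than 1.8, so the version is worth checking from scripts too. A minimal sketch that parses the major version out of a `java -version` banner (the sample string below is the banner shown above; on a live host you would capture `java -version 2>&1` instead):

```shell
# Parse the "1.x" major version from a java -version banner (sample string;
# a live check would capture: java -version 2>&1 | head -1).
banner='openjdk version "1.8.0_121"'
major=$(echo "$banner" | sed -E 's/.*"1\.([0-9]+).*/\1/')
if [ "$major" -ge 8 ]; then echo "java ok"; fi   # → java ok
```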
II. Installing Elasticsearch
1. Download the latest Elasticsearch release
Extract it to the installation directory /usr/local/ and rename it to elasticsearch (the full path becomes /usr/local/elasticsearch)
# tar xzf elasticsearch-6.4.2.tar.gz -C /usr/local
# cd /usr/local/
# mv elasticsearch-6.4.2/ elasticsearch
2. Create a regular user to run elasticsearch, change the ownership of the elasticsearch home directory to that user, and create the data directory /data/elasticsearch
# groupadd elasticsearch
# useradd -g elasticsearch elasticsearch -m
(Syntax: useradd USERNAME -g GROUPNAME. Note that useradd's -p option expects an already-encrypted password hash, so set the password afterwards with passwd elasticsearch instead.)
Change the owner and group of the Elasticsearch directory and everything in it to elasticsearch
# chown -R elasticsearch:elasticsearch elasticsearch
# mkdir /data/elasticsearch
# chown -R elasticsearch. /data/elasticsearch
3. Configure environment variables
# vim /etc/profile
ulimit -u 4096
# source /etc/profile
4. Edit the elasticsearch.yml configuration file
# vim config/elasticsearch.yml
cluster.name: my-application              # ELK cluster name
path.data: /data/elasticsearch            # elasticsearch data directory
path.logs: /usr/local/elasticsearch/logs  # elasticsearch log directory
network.host: 10.66.1.23                  # listen address, defaults to localhost
http.port: 9200                           # listen port, defaults to 9200
5. Adjust the related kernel and limit parameters
[root@localhost local]# vim /etc/security/limits.conf
Append the following:
* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536
# Set vm.max_map_count to 262144 and apply it
# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
# sysctl -p
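Elasticsearch's bootstrap checks verify the vm.max_map_count value set above, so it helps to confirm it before starting the service. A small sketch (the 262144 threshold is the one used in this guide; reading /proc is Linux-specific):

```shell
# Compare the kernel's live mmap-region limit against Elasticsearch's requirement.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
    echo "vm.max_map_count ok ($current)"
else
    echo "too low ($current); run: sysctl -w vm.max_map_count=$required"
fi
```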
6. The following is done as the elasticsearch user: log in as elasticsearch and start the service (note: it must be run as a non-root user)
# su - elasticsearch
Start elasticsearch:
$ /usr/local/elasticsearch/bin/elasticsearch
or
$cd /usr/local/elasticsearch/
$ ./bin/elasticsearch
Normally elasticsearch should run in the background; use the following command:
$ ./bin/elasticsearch -d
7. Check the elasticsearch status; output like the following means it is running normally
# curl http://10.66.1.23:9200
{
  "name" : "dlOHzTB",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "WuNxFom3QUWZLqC61-FSCw",
  "version" : {
    "number" : "6.4.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "04711c2",
    "build_date" : "2018-09-26T13:34:09.098244Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
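The curl response above is plain JSON, so it can also be checked from a script. A sketch that pulls the version number out of a saved copy of the response (the inline string below stands in for a live `curl -s http://10.66.1.23:9200` call):

```shell
# Extract version.number from the node-info JSON (a saved sample of the response
# above stands in for: curl -s http://10.66.1.23:9200).
response='{"name":"dlOHzTB","cluster_name":"elasticsearch","version":{"number":"6.4.2"},"tagline":"You Know, for Search"}'
version=$(echo "$response" | sed -E 's/.*"number":"([^"]+)".*/\1/')
echo "$version"   # → 6.4.2
```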
Go to the Elasticsearch config directory, open elasticsearch.yml with vim, find "network.host", change it to your IP, and save.
# cd elasticsearch/config/
# vim elasticsearch.yml
Restart Elasticsearch and open http://10.66.1.23:9200/ in a browser. A response like the one above means Elasticsearch was installed successfully. If Elasticsearch was started in the foreground with ./bin/elasticsearch, press Ctrl+C to stop it; once stopped, refreshing the browser shows it is no longer reachable.
http://10.66.1.23:9200/
III. Installing Logstash
1. Download the Logstash package
# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.2.tar.gz
2. Extract to the installation directory
# tar -zxf logstash-6.4.2.tar.gz -C /usr/local
# cd /usr/local/
# mv logstash-6.4.2 logstash
3. Run Logstash
# cd logstash/
# ./bin/logstash -e 'input { stdin { } } output { stdout {} }'
Type "hello world!" and verify that it is echoed back correctly
[root@elk-server logstash]# ./bin/logstash -e 'input { stdin { } } output { stdout {} }'
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /usr/local/logstash/logs which is now configured via log4j2.properties
[2017-08-28T15:11:33,267][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/local/logstash/data/queue"}
[2017-08-28T15:11:33,273][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/local/logstash/data/dead_letter_queue"}
[2017-08-28T15:11:33,300][INFO ][logstash.agent] No persistent UUID file found. Generating new UUID {:uuid=>"2fb479ab-0ca5-4979-89b1-4246df9a7472", :path=>"/usr/local/logstash/data/uuid"}
[2017-08-28T15:11:33,438][INFO ][logstash.pipeline] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2017-08-28T15:11:33,455][INFO ][logstash.pipeline] Pipeline main started
The stdin plugin is now waiting for input:
[2017-08-28T15:11:33,497][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}
hello world!
2017-08-28T07:11:42.724Z elk-server.huangming.org hello world!
IV. Installing Kibana
1. Download Kibana
# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.2-linux-x86_64.tar.gz
2. Extract to the installation directory
# tar -zxf kibana-6.4.2-linux-x86_64.tar.gz -C /usr/local
# cd /usr/local/
# mv kibana-6.4.2-linux-x86_64/ kibana
3. Edit the configuration
# cd kibana/
# vim config/kibana.yml
server.port: 5601                               # listen port
server.host: "10.66.1.23"                       # address the Kibana server binds to
elasticsearch.url: "http://10.66.1.23:9200"     # the Elasticsearch instance to connect to
4. Run Kibana
# ./bin/kibana &
[1] 3219
[root@Anwar01 kibana]# ./bin/kibana
  log   [06:48:27.127] [info][status][plugin:kibana@6.4.2] Status changed from uninitialized to green - Ready
  log   [06:48:27.222] [info][status][plugin:elasticsearch@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:27.227] [info][status][plugin:xpack_main@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:27.234] [info][status][plugin:searchprofiler@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:27.240] [info][status][plugin:ml@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:27.329] [info][status][plugin:tilemap@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:27.332] [info][status][plugin:watcher@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:27.352] [info][status][plugin:license_management@6.4.2] Status changed from uninitialized to green - Ready
  log   [06:48:27.355] [info][status][plugin:index_management@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:28.747] [info][status][plugin:timelion@6.4.2] Status changed from uninitialized to green - Ready
  log   [06:48:28.751] [info][status][plugin:graph@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:28.843] [info][status][plugin:monitoring@6.4.2] Status changed from uninitialized to green - Ready
  log   [06:48:28.849] [warning][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml
  log   [06:48:28.855] [warning][security] Session cookies will be transmitted over insecure connections. This is not recommended.
  log   [06:48:28.871] [info][status][plugin:security@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:28.913] [info][status][plugin:grokdebugger@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:28.924] [info][status][plugin:dashboard_mode@6.4.2] Status changed from uninitialized to green - Ready
  log   [06:48:28.928] [info][status][plugin:logstash@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:28.965] [info][status][plugin:apm@6.4.2] Status changed from uninitialized to green - Ready
  log   [06:48:29.028] [info][status][plugin:console@6.4.2] Status changed from uninitialized to green - Ready
  log   [06:48:29.031] [info][status][plugin:console_extensions@6.4.2] Status changed from uninitialized to green - Ready
  log   [06:48:29.037] [info][status][plugin:notifications@6.4.2] Status changed from uninitialized to green - Ready
  log   [06:48:29.042] [info][status][plugin:metrics@6.4.2] Status changed from uninitialized to green - Ready
  log   [06:48:45.846] [warning][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml
  log   [06:48:45.850] [info][status][plugin:reporting@6.4.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [06:48:45.868] [error][status][plugin:xpack_main@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:45.869] [error][status][plugin:searchprofiler@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:45.870] [error][status][plugin:ml@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:45.871] [error][status][plugin:tilemap@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:45.871] [error][status][plugin:watcher@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:45.872] [error][status][plugin:index_management@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:45.873] [error][status][plugin:graph@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:45.874] [error][status][plugin:security@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:45.875] [error][status][plugin:grokdebugger@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:45.875] [error][status][plugin:logstash@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:45.876] [error][status][plugin:reporting@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:45.877] [error][status][plugin:elasticsearch@6.4.2] Status changed from yellow to red - Request Timeout after 3000ms
  log   [06:48:47.418] [info][license][xpack] Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active
  log   [06:48:47.423] [info][status][plugin:xpack_main@6.4.2] Status changed from red to green - Ready
  log   [06:48:47.424] [info][status][plugin:searchprofiler@6.4.2] Status changed from red to green - Ready
  log   [06:48:47.425] [info][status][plugin:ml@6.4.2] Status changed from red to green - Ready
  log   [06:48:47.425] [info][status][plugin:tilemap@6.4.2] Status changed from red to green - Ready
  log   [06:48:47.426] [info][status][plugin:watcher@6.4.2] Status changed from red to green - Ready
  log   [06:48:47.427] [info][status][plugin:index_management@6.4.2] Status changed from red to green - Ready
  log   [06:48:47.428] [info][status][plugin:graph@6.4.2] Status changed from red to green - Ready
  log   [06:48:47.429] [info][status][plugin:grokdebugger@6.4.2] Status changed from red to green - Ready
  log   [06:48:47.430] [info][status][plugin:logstash@6.4.2] Status changed from red to green - Ready
  log   [06:48:47.430] [info][status][plugin:reporting@6.4.2] Status changed from red to green - Ready
  log   [06:48:47.431] [info][kibana-monitoring][monitoring-ui] Starting monitoring stats collection
  log   [06:48:47.436] [info][status][plugin:security@6.4.2] Status changed from red to green - Ready
  log   [06:48:47.564] [info][license][xpack] Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active
  log   [06:48:50.527] [info][status][plugin:elasticsearch@6.4.2] Status changed from red to green - Ready
  log   [06:49:07.536] [info][listening][server][http] Server running at http://10.66.1.23:5601
5. Verify Kibana
Open http://10.66.1.23:5601 in a client browser
The page prompts us to create an index pattern
First create Kibana's default index (named .kibana); if the index name entered does not exist, the pattern cannot be created
Check the running status and the installed plugins
ELK is now fully set up; next, let's build an example that collects the messages system log
V. Collecting syslog logs
1. Create the configuration file
# cd logstash/
# vim config/logstash.conf
input {
    file {
        path => ["/var/log/messages"]
        type => "syslog"
    }
}
filter {
    grok {
        match => ["message", "%{SYSLOGBASE} %{GREEDYDATA:content}"]
    }
}
output {
    elasticsearch {
        hosts => ["10.17.83.42:9200"]
        index => "syslog-%{+YYYY.MM.dd}"
    }
    stdout {
        codec => rubydebug
    }
}
The line match => [ "message", "%{SYSLOGBASE} %{GREEDYDATA:content}" ] can also be spelled out in full as:
match => [ "message", "%{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}: %{GREEDYDATA:content}" ]
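%{SYSLOGBASE} covers the timestamp, host, and program at the front of a syslog line, and %{GREEDYDATA:content} captures everything after it. The same split can be sketched outside Logstash with an ordinary regex; the pattern below is a rough stand-in for SYSLOGBASE (not the real grok definition), applied to a sample messages line:

```shell
# Strip the syslog preamble (timestamp, host, program:) and keep the body,
# mimicking what lands in the "content" field above.
line='Aug 28 16:46:15 elk-server root: helloooooooo22'
content=$(echo "$line" | sed -E 's/^[A-Z][a-z]{2} +[0-9]+ [0-9:]+ [^ ]+ [^ ]+: //')
echo "$content"   # → helloooooooo22
```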
Logstash configuration reference in the official documentation:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
2. Run Logstash with the configuration file
# ./bin/logstash -f ./config/logstash.conf &
[root@elk-server logstash]# ./bin/logstash -f ./config/logstash.conf &
[1] 4479
[root@elk-server logstash]# ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /usr/local/logstash/logs which is now configured via log4j2.properties
[2017-08-28T16:29:05,950][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.17.83.42:9200/]}}
[2017-08-28T16:29:05,955][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.17.83.42:9200/, :path=>"/"}
[2017-08-28T16:29:06,059][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.17.83.42:9200/"}
[2017-08-28T16:29:06,060][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-08-28T16:29:06,110][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-08-28T16:29:06,117][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://10.17.83.42:9200"]}
[2017-08-28T16:29:06,187][INFO ][logstash.pipeline] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2017-08-28T16:29:06,343][INFO ][logstash.pipeline] Pipeline main started
[2017-08-28T16:29:06,380][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}
3. Log in to the Kibana page
Click Management --> Index Patterns --> Create index
Under "Index name or pattern", enter the index specified in the Logstash config, replacing the trailing date with *
After creating it, go to the Discover page
4. Verify that syslog entries are being collected; generate a log entry by hand with the following commands
[root@elk-server kibana]# logger "helloooooooo22"
[root@elk-server kibana]# yum install httpd
Check the source log file for the most recently generated entries
[root@elk-server ~]# tail -f -n 10 /var/log/messages
Aug 28 16:46:15 elk-server root: helloooooooo33
Aug 28 16:47:17 elk-server yum[5048]: Installed: apr-1.4.8-3.el7.x86_64
Aug 28 16:47:17 elk-server yum[5048]: Installed: apr-util-1.5.2-6.el7.x86_64
Aug 28 16:47:18 elk-server yum[5048]: Installed: httpd-tools-2.4.6-45.el7.centos.4.x86_64
Aug 28 16:47:18 elk-server yum[5048]: Installed: mailcap-2.1.41-2.el7.noarch
Aug 28 16:47:19 elk-server systemd: Reloading.
Aug 28 16:47:19 elk-server systemd: Configuration file /usr/lib/systemd/system/auditd.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Aug 28 16:47:19 elk-server systemd: Configuration file /usr/lib/systemd/system/ebtables.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Aug 28 16:47:19 elk-server systemd: Configuration file /usr/lib/systemd/system/wpa_supplicant.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Aug 28 16:47:19 elk-server yum[5048]: Installed: httpd-2.4.6-45.el7.centos.4.x86_64
5. Back on the Kibana page, refresh; the new log entries are now displayed
Expand the most recent entry
(The following uses an rpm-installed Logstash.)
1. The logstash.yml file
[root@elk-node1 logstash]# grep -v "#" logstash.yml
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
path.logs: /var/log/logstash
2. Pipeline files
By default, pipeline files go in /etc/logstash/conf.d. The directory is empty at this point; create pipelines as needed. As an example, here is one that processes this host's messages log:
[root@elk-node1 logstash]# cd /etc/logstash/conf.d/
[root@elk-node1 conf.d]# vim messages.conf
input {
    file {
        path => "/var/log/messages"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.217.131:9200","192.168.217.132:9200"]
        index => "messages-%{+YYYY.MM.dd}"
    }
    stdout {
#       codec => rubydebug
    }
}
[root@elk-node1 conf.d]# cd ..
[root@elk-node1 logstash]# chown -R logstash:logstash conf.d/
[root@elk-node1 logstash]# chmod 644 /var/log/messages
3. Start and verify
1) Start test
[root@elk-node1 logstash]# cd /usr/share/logstash/
[root@elk-node1 logstash]# bin/logstash -e 'input { stdin { } } output { stdout {} }'
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs to console
Logstash starts, but run this way it prints warnings. Handle them as the messages suggest: create a config directory under $LS_HOME, symlink the files from /etc/logstash/ into it, and run again:
[root@elk-node1 logstash]# mkdir -p /usr/share/logstash/config/
[root@elk-node1 logstash]# ln -s /etc/logstash/* /usr/share/logstash/config
[root@elk-node1 logstash]# chown -R logstash:logstash /usr/share/logstash/config/
[root@elk-node1 logstash]# bin/logstash -e 'input { stdin { } } output { stdout {} }'
2) Start logstash and verify
[root@elk-node1 ~]# systemctl start logstash
[root@elk-node1 ~]# systemctl enable logstash.service
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[root@elk-node1 ~]# netstat -anplut
// check that port 9600 is listening
4. Verify
Open http://192.168.217.131:9100/
4. Install Kibana
1. Install Kibana with yum
[root@elk-node1 ~]# yum install -y kibana
[root@elk-node1 ~]# cd /etc/kibana
[root@elk-node1 kibana]# grep -v ^# /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.217.131:9200"
[root@elk-node1 kibana]# systemctl start kibana
[root@elk-node1 kibana]# netstat -anplut |grep :5601
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      4335/node
2. Verify in a browser
Open http://192.168.217.131:5601/
Add messages-* in the highlighted field, then click Discover. If no data appears in the graph, widen the time range to an hour or more.
That is a brief introduction to ELK; questions are welcome.
Install the GPG key for the elasticsearch yum repository (this must be configured on every server)
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure the elasticsearch yum repository
# vim /etc/yum.repos.d/elasticsearch.repo
Add the following to elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install elasticsearch
# yum install -y elasticsearch
Install the Java environment (Java 1.8 or later is required)
# wget http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm
# rpm -ivh jdk-8u131-linux-x64.rpm
Verify the Java installation
# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Create the elasticsearch data directory and change its owner and group
# mkdir -p /data/es-data   (a directory of your choice for storing data)
# chown -R elasticsearch:elasticsearch /data/es-data
Change the owner and group of the elasticsearch log directory
# chown -R elasticsearch:elasticsearch /var/log/elasticsearch/
Edit the elasticsearch configuration file
# vim /etc/elasticsearch/elasticsearch.yml
Find cluster.name, uncomment it, and set the cluster name
cluster.name: demon
Find node.name, uncomment it, and set the node name
node.name: elk-1
Set the data path
path.data: /data/es-data
Set the log path
path.logs: /var/log/elasticsearch/
Lock memory so it is not swapped out
bootstrap.memory_lock: true
Listen address
network.host: 0.0.0.0
Listen port
http.port: 9200
Add the following parameters so that the head plugin can access es (5.x; add them by hand if absent)
http.cors.enabled: true
http.cors.allow-origin: "*"
Start the service
# /etc/init.d/elasticsearch start
Starting elasticsearch: Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000085330000, 2060255232, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 2060255232 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/hs_err_pid2616.log                                             [FAILED]
This error occurs because the default heap size is 2G and the VM does not have that much memory. Adjust the parameters:
# vim /etc/elasticsearch/jvm.options
-Xms512m
-Xmx512m
Start again
# /etc/init.d/elasticsearch start
Check the service status; if there are errors, check the log with less /var/log/elasticsearch/demon.log (the log file is named after the cluster)
Enable the service at boot
# chkconfig elasticsearch on
Notes
A few parameters must be changed or startup will fail.
# vim /etc/security/limits.conf
Append the following (elk is the user that starts the service; * can be used instead)
elk soft nofile 65536
elk hard nofile 65536
elk soft nproc 2048
elk hard nproc 2048
elk soft memlock unlimited
elk hard memlock unlimited
Then change one more parameter
# vim /etc/security/limits.d/90-nproc.conf
Change 1024 to 2048 (ES requires at least 2048)
*          soft    nproc    2048
Another issue to watch for (the following in the log also causes startup to fail, and took a long time to track down):
[2017-06-14T19:19:01,641][INFO ][o.e.b.BootstrapChecks    ] [elk-1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-06-14T19:19:01,658][ERROR][o.e.b.Bootstrap          ] [elk-1] node validation exception
[1] bootstrap checks failed
[1]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
Fix: add one parameter to the configuration file (it disables the system call filter bootstrap check)
# vim /etc/elasticsearch/elasticsearch.yml
bootstrap.system_call_filter: false
Request port 9200 from a browser to check that it works
First check that port 9200 is listening
# netstat -antp | grep 9200
tcp        0      0 :::9200            :::*            LISTEN      2934/java
Test from a browser or curl (the following is a normal response)
# curl http://127.0.0.1:9200/
{
  "name" : "linux-node1",
  "cluster_name" : "demon",
  "cluster_uuid" : "kM0GMFrsQ8K_cl5Fn7BF-g",
  "version" : {
    "number" : "5.4.0",
    "build_hash" : "780f8c4",
    "build_date" : "2017-04-28T17:43:27.229Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.0"
  },
  "tagline" : "You Know, for Search"
}
How to interact with elasticsearch
Via the Java API or the RESTful API; clients exist for JavaScript, .NET, PHP, Perl, and Python
Use the API to check the status
# curl -i -XGET 'localhost:9200/_count?pretty'
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 95

{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
Installing plugins
Install the elasticsearch-head plugin. Either the Docker image or the GitHub elasticsearch-head project works; pick one of the two methods below.
1. Use the pre-built elasticsearch-head Docker image
# docker run -p 9100:9100 mobz/elasticsearch-head:5
Once the container is downloaded and started, open http://localhost:9100/ in a browser
2. Install elasticsearch-head from git
# yum install -y npm
# git clone git://github.com/mobz/elasticsearch-head.git
# cd elasticsearch-head
# npm install
# npm run start
Check that the port is listening
# netstat -antp | grep 9100
Test from a browser
http://IP:9100/
Using Logstash
Set up the Logstash environment. Official installation guide: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
Import the yum repository signing key:
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Install logstash with yum
# yum install -y logstash
Check where logstash was installed
# rpm -ql logstash
Create a symlink so the command can be run without typing the full path (it installs under /usr/share by default)
# ln -s /usr/share/logstash/bin/logstash /bin/
Run a logstash pipeline
# logstash -e 'input { stdin { } } output { stdout {} }'
Once it is running, type: nihao
stdout returns the result:
Notes:
-e       run the pipeline given on the command line
input    standard input, the {stdin} plugin
output   standard output, the {stdout} plugin
Use rubydebug to print more detailed output
# logstash -e 'input { stdin { } } output { stdout {codec => rubydebug} }'
Once it is running, type: nihao
stdout prints the result:
To keep the events both on standard output and in elasticsearch, do the following
# /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["192.168.1.202:9200"] } stdout { codec => rubydebug } }'
Once it is running, type: I am elk
The returned result (on standard output):
Using a Logstash configuration file
Official guide: https://www.elastic.co/guide/en/logstash/current/configuration.html
Create the configuration file
# vim /etc/logstash/conf.d/elk.conf
Add the following to the file
input { stdin { } }
output {
  elasticsearch { hosts => ["192.168.1.202:9200"] }
  stdout { codec => rubydebug }
}
Run logstash with the configuration file
# logstash -f ./elk.conf
Once it is running, type something and watch the standard-output result
Logstash input types
1. Input plugins
Reference: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
Using the file plugin
# vim /etc/logstash/conf.d/elk.conf
Add the following configuration
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.202:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
Run logstash with elk.conf to collect and filter
# logstash -f /etc/logstash/conf.d/elk.conf
Next, ship the security log too, storing each log type under its own index. Continue editing elk.conf
# vim /etc/logstash/conf.d/elk.conf
Add the secure log path
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
}
Run logstash with elk.conf to collect and filter
# logstash -f ./elk.conf
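The output block above is effectively a routing table: the event's type field decides which index prefix it goes to. A sketch of the same decision logic as a shell function (index names taken from the config above; "unrouted" is our own label for events no if-block matches):

```shell
# Map an event type to its target index prefix, as the if [type] blocks above do.
index_for() {
    case "$1" in
        system) echo "nagios-system" ;;
        secure) echo "nagios-secure" ;;
        *)      echo "unrouted" ;;   # no matching output block
    esac
}
index_for secure   # → nagios-secure
```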
With these settings in place, install Kibana next so the data can be displayed in a frontend
Installing and using Kibana
Set up the Kibana environment. Official installation guide: https://www.elastic.co/guide/en/kibana/current/install.html
Download the Kibana tar.gz package
# wget https://artifacts.elastic.co/downloads/kibana/kibana-5.4.0-linux-x86_64.tar.gz
Extract the tar package
# tar -xzf kibana-5.4.0-linux-x86_64.tar.gz
Move the extracted kibana into place
# mv kibana-5.4.0-linux-x86_64 /usr/local
Create a kibana symlink
# ln -s /usr/local/kibana-5.4.0-linux-x86_64/ /usr/local/kibana
Edit the kibana configuration file
# vim /usr/local/kibana/config/kibana.yml
Enable and set the following options
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.202:9200"
kibana.index: ".kibana"
Install screen so kibana can run in the background (optional; any other backgrounding method works too)
# yum -y install screen
# screen
# /usr/local/kibana/bin/kibana
# netstat -antp | grep 5601
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      17007/node
Open a browser and set up the matching index
http://IP:5601
II. ELK in practice
Now that indices can be created, let's ship the nginx, apache, messages, and secure logs to the frontend for display (if Nginx is already installed, just edit its config; otherwise install it first)
Edit the nginx configuration file and add the following (inside the http block)
log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domian":"$host",'
                '"host":"$server_addr",'
                '"size":"$body_bytes_sent",'
                '"responsetime":"$request_time",'
                '"referer":"$http_referer",'
                '"ua":"$http_user_agent"'
                '}';
Change access_log to use the json format defined above
access_log  logs/elk.access.log  json;
Next edit the apache configuration file
LogFormat "{ \
        \"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
        \"@version\": \"1\", \
        \"tags\":[\"apache\"], \
        \"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
        \"clientip\": \"%a\", \
        \"duration\": %D, \
        \"status\": %>s, \
        \"request\": \"%U%q\", \
        \"urlpath\": \"%U\", \
        \"urlquery\": \"%q\", \
        \"bytes\": %B, \
        \"method\": \"%m\", \
        \"site\": \"%{Host}i\", \
        \"referer\": \"%{Referer}i\", \
        \"useragent\": \"%{User-agent}i\" \
       }" ls_apache_json
Likewise switch the output to the json format defined above
CustomLog logs/access_log ls_apache_json
Edit the logstash configuration file for log collection
# vim /etc/logstash/conf.d/full.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
}
Run it and see the effect
# logstash -f /etc/logstash/conf.d/full.conf
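Because nginx now writes each access-log line as a JSON object, a quick sanity check is to run one line through a JSON parser before pointing Logstash at the file. The sample line below is hand-filled with plausible values (field names, including the `domian` spelling, are exactly those defined above); `python3 -m json.tool` is just a convenient validator and assumes python3 is installed:

```shell
# Validate one JSON-formatted access-log line (sample values, field names from
# the log_format above).
line='{"@timestamp":"2017-08-28T16:47:19+08:00","@version":"1","client":"192.168.1.10","url":"/index.html","status":"200","domian":"example.com","host":"192.168.1.202","size":"612","responsetime":"0.002","referer":"-","ua":"curl/7.29.0"}'
echo "$line" | python3 -m json.tool >/dev/null && echo "valid json"
```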
All the log indices now exist; go to Kibana, create the index patterns as before, and check how they are displayed
Next, display the MySQL slow query log
The MySQL slow query log has a special multi-line format, so it needs regular-expression matching, with the multiline codec grouping the lines of one entry (see the configuration)
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
    file {
        path => "/var/log/mysql/mysql.slow.log"
        type => "mysql"
        start_position => "beginning"
        codec => multiline {
            pattern => "^# User@Host:"
            negate => true
            what => "previous"
        }
    }
}
filter {
    grok {
        match => { "message" => "SELECT SLEEP" }
        add_tag => ["sleep_drop"]
        tag_on_failure => []
    }
    if "sleep_drop" in [tags] {
        drop {}
    }
    grok {
        match => { "message" => "(?m)^# User@Host: %{USER:User}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:Client_IP})?\]\s.*# Query_time: %{NUMBER:Query_Time:float}\s+Lock_time: %{NUMBER:Lock_Time:float}\s+Rows_sent: %{NUMBER:Rows_Sent:int}\s+Rows_examined: %{NUMBER:Rows_Examined:int}\s*(?:use %{DATA:Database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<Query>(?<Action>\w+)\s+.*)\n# Time:.*$" }
    }
    date {
        match => ["timestamp", "UNIX"]
        remove_field => ["timestamp"]
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "mysql" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-mysql-slow-%{+YYYY.MM.dd}"
        }
    }
}
Check the result (each slow query shows up as one event; without the regex match, every line would appear as a separate event)
Analyze further according to your specific log output needs
III. The final ELK setup
Install redis
# yum install -y redis
Edit the redis configuration file
# vim /etc/redis.conf
Change the following
daemonize yes
bind 192.168.1.202
Start the redis service
# /etc/init.d/redis restart
Test whether redis is up
# redis-cli -h 192.168.1.202
Type info; if there is no error, it is working
redis 192.168.1.202:6379> info
redis_version:2.4.10
....
Create the redis-out.conf configuration file to store standard input into redis
# vim /etc/logstash/conf.d/redis-out.conf
Add the following
input {
    stdin {}
}
output {
    redis {
        host => "192.168.1.202"
        port => "6379"
        password => 'test'
        db => '1'
        data_type => "list"
        key => 'elk-test'
    }
}
Run logstash with the redis-out.conf configuration file
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-out.conf
Once it is running, type something into logstash (then check the effect)
Create the redis-in.conf configuration file to read the stored data out of redis and into elasticsearch
# vim /etc/logstash/conf.d/redis-in.conf
Add the following
input {
    redis {
        host => "192.168.1.202"
        port => "6379"
        password => 'test'
        db => '1'
        data_type => "list"
        key => 'elk-test'
        batch_count => 1    # how many events to fetch from the queue at a time; the default is 125 (if redis holds fewer than 125 events it errors, hence this value while testing)
    }
}
output {
    elasticsearch {
        hosts => ['192.168.1.202:9200']
        index => 'redis-test-%{+YYYY.MM.dd}'
    }
}
Run logstash with the redis-in.conf configuration file
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-in.conf
Now change the earlier configuration so every monitored log source is first written to redis, and then shipped from redis to elasticsearch. Edit full.conf as follows
input {
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    if [type] == "http" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_http'
        }
    }
    if [type] == "nginx" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_nginx'
        }
    }
    if [type] == "secure" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_secure'
        }
    }
    if [type] == "system" {
        redis {
            host => "192.168.1.202"
            password => 'test'
            port => "6379"
            db => "6"
            data_type => "list"
            key => 'nagios_system'
        }
    }
}
Run logstash with the full.conf configuration file
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/full.conf
Check in redis whether the data has been written (sometimes the monitored files produce no new log lines, in which case nothing shows up in redis either)
Read the data back out of redis and write it into elasticsearch (use another host for this part of the experiment)
Edit the configuration file
# vim /etc/logstash/conf.d/redis-in.conf
Add the following
input {
    redis {
        type => "system"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_system'
        batch_count => 1
    }
    redis {
        type => "http"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_http'
        batch_count => 1
    }
    redis {
        type => "nginx"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_nginx'
        batch_count => 1
    }
    redis {
        type => "secure"
        host => "192.168.1.202"
        password => 'test'
        port => "6379"
        db => "6"
        data_type => "list"
        key => 'nagios_secure'
        batch_count => 1
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.1.202:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
}
Note: the input collects from the client side, while the output stores into the elasticsearch on 192.168.1.202 just as before. To store on the current host instead, change hosts in the output to localhost; to display the data in Kibana as well, deploy Kibana on that host too. Why do it this way? Loose coupling: logs are collected on the client and written to redis on the server (or a local redis), and the output side only has to connect to the ES server.
Run the command and check the effect
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-in.conf
The result is the same as writing straight to the ES server (except that the logs are first stored in redis and then fetched back out of redis)
Taking ELK to production
1. Log categories
    System logs       rsyslog    logstash    syslog plugin
    Access logs       nginx      logstash    codec json
    Error logs        file       logstash    multiline
    Application logs  file       logstash    codec json
    Device logs       syslog     logstash    syslog plugin
    Debug logs        file       logstash    json or multiline
2. Log standardization
    Paths     fixed
    Format    json wherever possible
3. Rollout order: system logs --> error logs --> application logs --> access logs
Because ES keeps indices forever by default, old log indices need to be deleted periodically. The command below deletes indices older than a given number of days:
# curl -X DELETE http://xx.xx.com:9200/logstash-*-$(date +%Y.%m.%d -d "-$n days")
(here $n is the number of days to keep; note that the date uses dots, matching the %{+YYYY.MM.dd} index suffix)
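The date expression expands to a dated index suffix, so it can be dry-run on its own before wiring it into the DELETE call (GNU date; n=7 below is a sample retention period):

```shell
# Build the name of the index from n days ago, matching the %{+YYYY.MM.dd} suffix.
n=7
suffix=$(date +%Y.%m.%d -d "-$n days")
echo "logstash-system-$suffix"
```

Echo the name first, and swap echo for the curl -X DELETE call once it looks right.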