ELK 6.2 Deployment and Practice

Architecture design:

[Figure: architecture diagram]

I built this stack after joining a new company, with the goal of handling roughly 100 GB of logs per day. On the 5.x stack I usually buffered through a Redis list, but Redis clusters have some compatibility limitations, so this time the message queue in the middle was swapped for Kafka, whose high throughput speeds up data collection.


Setup:

OS version: CentOS 7.3

IP            hostname    spec               services deployed
10.10.97.64   elk-1       16 cores / 16 GB   zookeeper, kafka, es, logstash
10.10.97.65   elk-2       8 cores / 16 GB    zookeeper, kafka, es, kibana
10.10.97.66   elk-3       16 cores / 16 GB   zookeeper, kafka, es, logstash

Note: in a real deployment it is not recommended to stack this many services on a single machine.


I. Initialize the environment

1. Set the hostname (run on each node with its own name: elk-1, elk-2, elk-3)

hostnamectl --static set-hostname elk-1

2. Disable the firewall and SELinux

systemctl stop firewalld.service

systemctl disable firewalld.service

vi /etc/selinux/config

    SELINUX=disabled

3. Install JDK 1.8

rpm -vih jdk-8u151-linux-x64.rpm

4. System parameter tuning

vi + /etc/security/limits.conf

* soft nofile 65536

* hard nofile 131072

* soft nproc 2048

* hard nproc 4096

vi + /etc/sysctl.conf

vm.max_map_count=655360

sysctl -p

5. Mount a data disk (if needed)

mkdir /data

mkfs.xfs -f /dev/xvdb

mount -t xfs /dev/xvdb /data

echo "/dev/xvdb               /data                   xfs    defaults        0 0" >> /etc/fstab

6. Create the directories

mkdir /data/es /data/logs /data/zookeeper /data/logs/kafka /data/logs/logstash -p

        /data/logs — general logs
        /data/es — Elasticsearch data
        /data/logs/kafka — Kafka data
        /data/zookeeper — ZooKeeper data
        /data/logs/logstash — Logstash logs

7. Download the packages

cd /tmp

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.2.tar.gz

wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.2.tar.gz

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.2-linux-x86_64.tar.gz

wget https://archive.apache.org/dist/kafka/0.10.0.0/kafka_2.11-0.10.0.0.tgz

wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz

for node in elk-2 elk-3; do scp /tmp/*.tar.gz /tmp/*.tgz "${node}:/tmp/"; done    # sync the packages to the other nodes

8. Extract:

tar zxf elasticsearch-6.2.2.tar.gz

tar zxf logstash-6.2.2.tar.gz

tar zxf kibana-6.2.2-linux-x86_64.tar.gz

tar zxf kafka_2.11-0.10.0.0.tgz

tar zxf zookeeper-3.4.8.tar.gz

cd /opt

mv /tmp/elasticsearch-6.2.2? ./elasticsearch

mv /tmp/logstash-6.2.2? ./logstash

mv /tmp/kibana-6.2.2-linux-x86_64 ./kibana

mv /tmp/zookeeper-3.4.8 ./zookeeper

mv /tmp/kafka_2.11-0.10.0.0 ./kafka

9. Add hosts entries

vi + /etc/hosts

10.10.97.64    elk-1

10.10.97.65    elk-2

10.10.97.66    elk-3


II. ZooKeeper cluster setup

1. Edit the main config

vim? /opt/zookeeper/conf/zoo.cfg

=============== Same config on all nodes ===============

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/data/zookeeper

dataLogDir=/data/logs

clientPort=2181

maxClientCnxns=1000

autopurge.snapRetainCount=2

autopurge.purgeInterval=1

server.1=10.10.97.64:2888:3888

server.2=10.10.97.65:2888:3888

server.3=10.10.97.66:2888:3888

===============END===============

2. Edit the log config

vim /opt/zookeeper/conf/log4j.properties

zookeeper.log.dir=/data/logs/

vim +124 /opt/zookeeper/bin/zkServer.sh    # add the line below at line 125

export ZOO_LOG_DIR=/data/logs

3. Create each node's unique cluster ID file (the value matches the N in server.N in zoo.cfg)

On elk-1:  echo "1" > /data/zookeeper/myid

On elk-2:  echo "2" > /data/zookeeper/myid

On elk-3:  echo "3" > /data/zookeeper/myid
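If you script this step across nodes, the id can be derived from the elk-N hostname. A minimal sketch (the hard-coded variable stands in for `hostname -s` on a live node):

```shell
# Derive the ZooKeeper myid value from an elk-N hostname.
host="elk-2"          # on a real node: host=$(hostname -s)
id="${host##*-}"      # keep everything after the last '-'
echo "$id"            # this value would go into /data/zookeeper/myid
```

This keeps myid consistent with the server.N entries in zoo.cfg without hand-editing each node.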

4. Start

sh /opt/zookeeper/bin/zkServer.sh start

5. Verify:

sh /opt/zookeeper/bin/zkServer.sh status    # one node reports "leader" (it serves followers on port 2888), the others report "follower"

III. Kafka cluster setup

Note: this guide uses kafka_2.11-0.10; kafka_2.10-0.9 would not run properly here.

1. Cap the heap size

vim +16 /opt/kafka/bin/kafka-server-start.sh    # tune for your hardware

export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"

2. Edit the main config

mv? /opt/kafka/config/server.properties? /opt/kafka/config/server.properties.bak

vim /opt/kafka/config/server.properties

=============== elk-1 config ===============

broker.id=0

port=9092

host.name=10.10.97.64

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/data/logs/kafka

log.retention.hours=72

message.max.bytes=5242880

default.replication.factor=2

replica.fetch.max.bytes=5242880

num.partitions=2

num.recovery.threads.per.data.dir=1

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

log.cleaner.enable=false

zookeeper.connect=10.10.97.64:2181,10.10.97.65:2181,10.10.97.66:2181

zookeeper.connection.timeout.ms=6000

=============== elk-2 config ===============

broker.id=1

port=9092

host.name=10.10.97.65

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/data/logs/kafka

log.retention.hours=72

message.max.bytes=5242880

default.replication.factor=2

replica.fetch.max.bytes=5242880

num.partitions=2

num.recovery.threads.per.data.dir=1

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

log.cleaner.enable=false

zookeeper.connect=10.10.97.64:2181,10.10.97.65:2181,10.10.97.66:2181

zookeeper.connection.timeout.ms=6000

=============== elk-3 config ===============

broker.id=2

port=9092

host.name=10.10.97.66

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/data/logs/kafka

log.retention.hours=72

message.max.bytes=5242880

default.replication.factor=2

replica.fetch.max.bytes=5242880

num.partitions=2

num.recovery.threads.per.data.dir=1

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

log.cleaner.enable=false

zookeeper.connect=10.10.97.64:2181,10.10.97.65:2181,10.10.97.66:2181

zookeeper.connection.timeout.ms=6000

===============END===============
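For reference, the byte-count settings above decode as follows; a quick sanity check with shell arithmetic:

```shell
# log.segment.bytes: 1073741824 bytes is exactly 1 GiB per segment file.
echo $((1073741824 / 1024 / 1024))   # → 1024 (MiB per segment)

# message.max.bytes / replica.fetch.max.bytes: a 5 MiB cap, and keeping the
# two equal ensures the replica fetch size never falls below the largest
# allowed message.
echo $((5242880 / 1024 / 1024))      # → 5 (MiB per message)
```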

3. Start:

nohup /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties >> /data/logs/kafka-nohup.out 2>&1 &

4. Verify:

4.1. Check the port:  netstat -lntp | grep 9092

4.2. Create a topic

/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

4.3. Confirm the topic was created

/opt/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181

4.4. Verify end to end

1. Produce a message on the server side

/opt/kafka/bin/kafka-console-producer.sh --broker-list 10.10.97.64:9092 --topic test

2. Consume the message on the client side

/opt/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

IV. Elasticsearch cluster setup

useradd es

chown -R es: /opt/elasticsearch

chown es: -R /data/es

chown root:es /data/logs

chmod 775 /data/logs

1. Edit the main config

mv? /opt/elasticsearch/config/elasticsearch.yml? /opt/elasticsearch/config/elasticsearch.yml.bak

vi /opt/elasticsearch/config/elasticsearch.yml

=============== elk-1 config ===============

cluster.name: elk-es

node.name: elk-1

node.attr.rack: r1

path.data: /data/es

path.logs: /data/logs

network.host: 10.10.97.64

http.port: 9200

transport.tcp.port: 9300

discovery.zen.ping.unicast.hosts: ["10.10.97.65", "10.10.97.66","[::1]"]

discovery.zen.minimum_master_nodes: 2

action.destructive_requires_name: true

thread_pool.bulk.queue_size: 1000

=============== elk-2 config ===============

cluster.name: elk-es

node.name: elk-2

node.attr.rack: r1

path.data: /data/es

path.logs: /data/logs

network.host: 10.10.97.65

http.port: 9200

transport.tcp.port: 9300

discovery.zen.ping.unicast.hosts: ["10.10.97.64", "10.10.97.66","[::1]"]

discovery.zen.minimum_master_nodes: 2

action.destructive_requires_name: true

=============== elk-3 config ===============

cluster.name: elk-es

node.name: elk-3

node.attr.rack: r1

path.data: /data/es

path.logs: /data/logs

network.host: 10.10.97.66

http.port: 9200

transport.tcp.port: 9300

discovery.zen.ping.unicast.hosts: ["10.10.97.64", "10.10.97.65","[::1]"]

discovery.zen.minimum_master_nodes: 2

action.destructive_requires_name: true

===============END===============

2. Set the heap size (tune for your hardware)

vim /opt/elasticsearch/config/jvm.options

-Xms6g

-Xmx6g

3. Start:

sudo su - es -c? "/opt/elasticsearch/bin/elasticsearch -d"

4. Verify:

tail -fn111 /data/logs/elk-es.log

4.1. Check the ports

netstat -lntp|grep "9200\|9300"

4.2. Check node info

curl localhost:9200

4.3. Check cluster health

curl "http://10.10.97.64:9200/_cat/health?v"
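For scripted monitoring, /_cluster/health returns JSON. A minimal sketch that parses the status field out of a captured sample response (swap the hard-coded string for a live `curl -s http://10.10.97.64:9200/_cluster/health` in practice):

```shell
# Sample /_cluster/health response, captured as a literal so the sketch
# runs without a live cluster.
resp='{"cluster_name":"elk-es","status":"green","number_of_nodes":3}'

# Pull out the status field with sed (no jq dependency).
status=$(echo "$resp" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "$status"   # → green
```

Anything other than green (yellow, red) is worth investigating before sending traffic.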

V. Kibana setup

mv /opt/kibana/config/kibana.yml /opt/kibana/config/kibana.yml.bak

vi /opt/kibana/config/kibana.yml

server.port: 5601

server.host: "0.0.0.0"

elasticsearch.url: "http://10.10.97.64:9200"

1. Start:

nohup /opt/kibana/bin/kibana >> /data/logs/kibana-nohup.out 2>&1 &

2. Verify:

curl -I localhost:5601    # an HTTP 200 means success

VI. Logstash setup

1. Set the Logstash heap size (tune for your hardware)

vi /opt/logstash/config/jvm.options

-Xms4g

-Xmx4g

vi /opt/logstash/bin/logstash.conf    (since ELK 6, document_type is deprecated, so type-based filtering has changed)

=================== Logstash config reference =======================

input {
        kafka {
                codec => "json"
                topics => ["test"]
                bootstrap_servers => "10.10.97.64:9092,10.10.97.65:9092,10.10.97.66:9092"
                auto_offset_reset => "latest"
                group_id => "logstash-gl"
        }
}

filter {
        mutate {
                # use [a][b] syntax for nested fields; [fields][service] is
                # kept, since the output index name still interpolates it
                remove_field => ["[beat][name]", "[beat][version]", "@version", "offset"]
        }
}

output {
        elasticsearch {
                hosts => ["10.10.97.64:9200","10.10.97.65:9200","10.10.97.66:9200"]
                index => "logstash-%{[fields][service]}-%{+YYYY.MM.dd}"
                document_type => "%{[fields][service]}"
                workers => 1
        }
        stdout { codec => rubydebug }
}

==================================END======================================
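The index option above interpolates the service name set in Filebeat plus the event date. A small sketch of the name it produces (the date is a fixed example here; Logstash substitutes the event's @timestamp):

```shell
# Illustrate what logstash-%{[fields][service]}-%{+YYYY.MM.dd} expands to.
service="jetty_srm"   # comes from fields.service in filebeat.yml
day="2018.03.01"      # stands in for the event date
echo "logstash-${service}-${day}"   # → logstash-jetty_srm-2018.03.01
```

Daily per-service indices make retention easy: old days can be dropped with a single index delete.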

2. Start:

nohup /opt/logstash/bin/logstash -f /opt/logstash/bin/logstash.conf >> /data/logs/logstash/nohup.out 2>&1 &

3. Verify:

netstat -lntp |grep 9600

tail -f /data/logs/logstash/nohup.out

VII. Client configuration (log collection):

cd /opt

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-linux-x86_64.tar.gz

tar zxf filebeat-6.2.2-linux-x86_64.tar.gz

mv filebeat-6.2.2-linux-x86_64 filebeat

rm -fr filebeat-6.2.2-linux-x86_64.tar.gz

mkdir /data/logs/filebeat

> /opt/filebeat/filebeat.yml    # empty the stock config

vi /opt/filebeat/filebeat.yml

====================== Filebeat config reference ========================

filebeat.prospectors:
- input_type: log
  paths:
    - /data/logs/jetty_9504/run*.log
  ignore_older: "24h"
  tail_files: true
  max_bytes: 104857600
  fields:
    service: jetty_srm
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  multiline.timeout: 10s
  multiline.max_lines: 1000
- input_type: log
  paths:
    - /data/logs/jetty_9505/run*.log
  ignore_older: "24h"
  tail_files: true
  max_bytes: 104857600
  fields:
    service: jetty_crm
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  multiline.timeout: 10s
  multiline.max_lines: 1000

output.kafka:
  enabled: true
  hosts: ["10.10.97.64:9092","10.10.97.65:9092","10.10.97.66:9092"]
  topic: test
  timeout: 10

============================= END ===============================

Note: max_bytes and tail_files are prospector-level options, so they are set inside each prospector rather than at the top level.
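The multiline settings treat any line starting with a YYYY-MM-DD date as the start of a new event; negate: true with match: after folds non-matching lines (stack traces, wrapped output) into the previous event. A quick check of what the pattern matches, using grep over made-up sample log lines:

```shell
# Count the lines the multiline pattern would treat as new-event starts.
starts=$(printf '%s\n' \
  '2018-03-01 10:00:00 ERROR something failed' \
  'java.lang.RuntimeException: boom' \
  '    at com.example.Foo.bar(Foo.java:42)' \
  '2018-03-01 10:00:01 INFO recovered' \
  | grep -cE '^[0-9]{4}-[0-9]{2}-[0-9]{2}')
echo "$starts"   # → 2: the two dated lines; the stack trace folds into the first event
```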

Start:

nohup /opt/filebeat/filebeat -e -c /opt/filebeat/filebeat.yml >> /data/logs/filebeat-nohup.out 2>&1 &

Verify:

1. ps -ef | grep filebeat    # Filebeat does not listen on a port, so check the process instead

2. tail -f /data/logs/filebeat-nohup.out


That wraps up the deployment.

I just finished visualizing our nginx logs today, so I'm sharing a couple of screenshots.

[Figure: traffic map, 30 minutes of data]

[Figure: heat map]
Summary: ELK 6.x is a clear step up from 5.x. Kibana 6 feels much smoother, and it adds plenty of new features, such as APM and machine learning, that I'd love to try but have been too busy for. If you've read this far, the article has hopefully been of some use; the config files above are available on my GitHub, just change the IPs. Happy to discuss more: QQ 546020152
