Notes on a Simple ELK 5.5.0 + Filebeat 7 Setup

1. Versions used in this setup

Elasticsearch: 5.5.0
Logstash: 5.5.0
Filebeat: 7.0.0
OS: ELK is installed on CentOS 7; Filebeat is deployed on Windows 10

2. Installing ELK itself is not covered here; Elasticsearch listens on its default port, 9200.
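Before wiring anything up, it is worth confirming that Elasticsearch actually answers. A quick sanity check from any machine that can reach the CentOS host (10.239.95.20 is the address used later in the Logstash config):

curl http://10.239.95.20:9200

A healthy node replies with a JSON document whose version block reports "number" : "5.5.0".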

3. Downloading and installing Filebeat

Filebeat is a component that replaces Logstash-Forwarder: a lightweight log shipper that uses far fewer system resources than Logstash-Forwarder did. My machine here runs Windows, but Filebeat works just as well on Linux, and the configuration is identical.

First, download Filebeat from https://www.elastic.co/downloads/beats/filebeat#ga-release. The download page documents the install and usage steps clearly, so I will not repeat them; in short: download, unpack, edit the config file, start, and watch the output.


I downloaded the latest 7.0 release; just unpack the archive.


(Screenshot: contents of the extracted Filebeat directory)

4. Configure Filebeat via filebeat.yml in the extracted directory

Here is the configuration file in full. The only substantive edits against the stock example are: enabled: true on the log input, paths pointing at d:\bak\*.log, and the output switched from Elasticsearch to Logstash. Note that output.logstash.hosts below is 127.0.0.1:5044; since Filebeat (Windows) and Logstash (CentOS) run on different machines, this should be the Logstash server's address rather than localhost:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
    - d:\bak\*.log

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["127.0.0.1:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
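One set of commented options above is worth enabling for real use: if the logs in d:\bak contain multi-line entries such as Java stack traces, each continuation line will otherwise arrive in Elasticsearch as a separate event. A minimal sketch of the multiline settings under the log input, assuming each new log entry starts with "[" (the pattern is an assumption; adapt it to your log format):

  # Any line that does NOT start with '[' is appended to the previous event
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after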

5. Start Filebeat

D:\filebeat>filebeat -e -c filebeat.yml
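Here -e logs to the console and -c selects the config file. Filebeat 7 also ships test subcommands that are handy before leaving console mode; the first validates filebeat.yml, the second checks connectivity to the configured Logstash output:

D:\filebeat>filebeat test config -c filebeat.yml
D:\filebeat>filebeat test output -c filebeat.yml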

6. Create the Logstash pipeline config: logstash-log.conf in the Logstash directory

input {
    beats {
        codec => plain { charset => "UTF-8" }
        port => "5044"
    }
}

filter {
    mutate {
        remove_field => "@version"
        remove_field => "offset"
        remove_field => "input_type"
        remove_field => "beat"
        remove_field => "tags"
    }
    ruby {
        code => "event.timestamp.time.localtime"
    }
}

output {
    elasticsearch {
        codec => plain { charset => "UTF-8" }
        hosts => "10.239.95.20:9200"
    }
}

Note that no index is specified in the elasticsearch output, so Logstash falls back to its default and creates daily indices named logstash-YYYY.MM.dd.
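If you would rather control the index name, the elasticsearch output plugin accepts an index option; the name below is just an example:

output {
    elasticsearch {
        codec => plain { charset => "UTF-8" }
        hosts => "10.239.95.20:9200"
        index => "filebeat-%{+YYYY.MM.dd}"
    }
}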

Start Logstash:

./bin/logstash -f ./logstash-log.conf
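Logstash 5.x can also syntax-check the pipeline without starting it, via the standard --config.test_and_exit flag (alias -t):

./bin/logstash -f ./logstash-log.conf --config.test_and_exit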

7. Test: write some data into D:\bak\test.log and check the result in Elasticsearch (I viewed it through a Node-based client).
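Without any client installed, a plain curl against the default logstash-* indices is enough to confirm the events arrived:

curl 'http://10.239.95.20:9200/logstash-*/_search?pretty&size=2'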


(Screenshot: Logstash console output)

(Screenshot: the collected log entries in Elasticsearch)


With that, a simple Filebeat-based log collection pipeline is basically complete.
