Version: FileBeat 6.3
Problem
The project is configured to emit its logs in JSON so they can be imported into ELK easily. The generated lines look like this:
{
  "@timestamp": "2018-06-29T16:24:27.555+08:00",
  "severity": "INFO",
  "service": "osg-sender",
  "trace": "",
  "span": "",
  "parent": "",
  "exportable": "",
  "pid": "4620",
  "thread": "restartedMain",
  "class": "o.s.c.a.AnnotationConfigApplicationContext",
  "rest": "Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@7a7df24c: startup date [Fri Jun 29 16:24:27 CST 2018]; root of context hierarchy"
}
FileBeat then ships each line of this JSON file straight into ElasticSearch. The catch is that FileBeat automatically generates its own @timestamp field, recording the import time, which conflicts with the field of the same name already present in the log.
Solution
There are two approaches:
- Rename the @timestamp field in the log file, e.g. to logDate, so it no longer clashes with FileBeat's. In that case logDate must be added to FileBeat's fields.yml as an index field of type date, otherwise it will not appear as a time filter when creating the index pattern in Kibana (see the sketch after this list).
- Leave the log content alone and edit FileBeat's fields.yml instead: rename its original @timestamp field and add a new @timestamp field for the log to use.
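For reference, a minimal sketch of the fields.yml entry that approach 1 would need; the field name logDate follows the rename suggested above, and the description text is an assumption, not something FileBeat ships with:
    - name: logDate
      type: date
      required: true
      description: >
        Timestamp parsed from the application's JSON log line
        (renamed from @timestamp to avoid the clash with FileBeat's).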
I went with the second approach. The modified fields.yml:
- key: beat
  title: Beat
  description: >
    Contains common beat fields available in all event types.
  fields:

    - name: beat.name
      description: >
        The name of the Beat sending the log messages. If the Beat name is
        set in the configuration file, then that value is used. If it is not
        set, the hostname is used. To set the Beat name, use the `name`
        option in the configuration file.

    - name: beat.hostname
      description: >
        The hostname as returned by the operating system on which the Beat is
        running.

    - name: beat.timezone
      description: >
        The timezone as returned by the operating system on which the Beat is
        running.

    - name: beat.version
      description: >
        The version of the beat that generated this event.

    - name: "@timestamp-beat"
      type: date
      required: true
      format: date
      example: August 26th 2016, 12:35:53.332
      description: >
        The timestamp when the event log record was generated.

    - name: "@timestamp"
      type: date
      format: "yyyy-MM-dd'T'HH:mm:ss.SSSZZ"
The original @timestamp is renamed to @timestamp-beat, and a new @timestamp is added with an explicit date format matching the log.
Now, when creating the index pattern, two time filters show up: one for when the log record was generated and one for when FileBeat imported it.
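One caveat: fields.yml only feeds the index template that FileBeat loads into ElasticSearch, so the edit takes effect once that template is regenerated and loaded again. If a template from an earlier run already exists it may need to be overwritten; a minimal sketch for filebeat.yml, assuming the default template setup:
setup.template.overwrite: true
Running filebeat setup --template after the change, or simply deleting the stale template from ElasticSearch, should achieve the same.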
Configuration
Below are the project's logging configuration file and the FileBeat configuration.
logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="springAppName" source="spring.application.name"/>

    <!-- Example for logging into the build folder of your project -->
    <property name="LOG_FILE" value="${BUILD_FOLDER:-log}/${springAppName}"/>

    <!-- You can override this to have a custom pattern -->
    <property name="CONSOLE_LOG_PATTERN" value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <!-- Appender to log to console -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <!-- Minimum logging level to be presented in the console logs -->
            <level>DEBUG</level>
        </filter>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Appender to log to file -->
    <appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>90</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Appender to log to file in a JSON format -->
    <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}.json</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}</fileNamePattern>
            <maxHistory>90</maxHistory>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <!--<fieldName>logDate</fieldName>-->
                    <!--<pattern>yyyy-MM-dd HH:mm:ss.SSS</pattern>-->
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "parent": "%X{X-B3-ParentSpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
                <stackTrace>
                    <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
                        <maxDepthPerThrowable>30</maxDepthPerThrowable>
                        <maxLength>2048</maxLength>
                        <shortenedClassNameLength>20</shortenedClassNameLength>
                        <exclude>^sun\.reflect\..*\.invoke</exclude>
                        <exclude>^net\.sf\.cglib\.proxy\.MethodProxy\.invoke</exclude>
                        <rootCauseFirst>true</rootCauseFirst>
                    </throwableConverter>
                </stackTrace>
            </providers>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="console"/>
        <!-- uncomment this to have also JSON logs -->
        <appender-ref ref="logstash"/>
        <appender-ref ref="flatfile"/>
    </root>
</configuration>
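A side note on approach 1: the two commented-out lines inside the <timestamp> provider above are exactly where that rename would go. Uncommented, the provider would look roughly like this (a sketch assembled from those commented values, not tested here):
<timestamp>
    <fieldName>logDate</fieldName>
    <pattern>yyyy-MM-dd HH:mm:ss.SSS</pattern>
</timestamp>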
filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /root/ogter/build/*.json.*
    - /root/ogter/log/*.json
  exclude_files: ['.gz$']
  json.keys_under_root: true
  json.overwrite_keys: true

output.elasticsearch:
  hosts: ["192.168.1.17:9200"]