Filebeat collects log data and transfers it to redis. Different es indexes are created based on log fields through logstash
2022-06-22 18:13:00 【Non famous operation and maintenance】
1. Filebeat.yml configuration
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  exclude_files: ['.gz$','INFO']
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  tags: ["nginx-log-messages"]
  fields:
    log_source: messages
  fields_under_root: true

output.redis:
  hosts: ["192.168.0.111:6379"]
  key: nginx_log
  password: nginxredis
  db: 0
Parameter description
fields:
  log_source: messages
fields_under_root: true
Using fields tells filebeat to add an extra field, log_source, with the value messages to every collected log entry. In the logstash output to elasticsearch this field is used to determine where the log came from, so the corresponding index can be created. If fields_under_root is set to true, the added field becomes a top-level field of the event.
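For example, with fields_under_root: true the event pushed to redis carries log_source at the top level, roughly like this (a simplified sketch; the real event contains more metadata, and the message value here is only illustrative):

{
  "@timestamp": "2022-06-22T10:13:00.000Z",
  "message": "192.168.0.1 - - [22/Jun/2022:18:13:00 +0800] \"GET / HTTP/1.1\" 200 ...",
  "tags": ["nginx-log-messages"],
  "log_source": "messages"
}

Without fields_under_root: true the same value would be nested as "fields": { "log_source": "messages" } and would have to be referenced as [fields][log_source] in logstash.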
The top-level field is then used in the output to elasticsearch as follows:
[[email protected] logstash]# vim config/logstash.conf
input {
  redis {
    data_type => "list"
    host => "192.168.0.111"
    db => "0"
    port => "6379"
    key => "nginx_log"
    password => "nginxredis"
  }
}
output {
  # Each record in the list stored under the Redis key nginx_log carries the
  # log_source field, which tells us which application the log line came from
  if [log_source] == 'messages' {    # note how the condition is written
    elasticsearch {
      hosts => ["192.168.0.111:9200"]
      index => "nginx-message-%{+YYYY.MM.dd}"
      #user => "elastic"
      #password => "elastic123"
    }
  }
  # Alternatively, the decision can be based on tags
  if "nginx-log-messages" in [tags] {
    elasticsearch {
      hosts => ["192.168.0.111:9200"]
      index => "nginx-message-%{+YYYY.MM.dd}"
    }
  }
}
2. Outputting logs from multiple applications to Redis
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/access.log
  tags: ["nginx-log-access"]
  fields:
    log_source: access
  fields_under_root: true

- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/error.log
  tags: ["nginx-log-error"]
  fields:
    log_source: error
  fields_under_root: true

output.redis:
  hosts: ["192.168.0.111:6379"]
  key: nginx_log
  password: nginxredis
  db: 0
In redis the result is that everything is pushed into the list for the single key nginx_log. The entries cannot be distinguished by key; the only way to tell which application a line belongs to is the log_source field (or another self-defined attribute) carried in each record of that list.
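On the logstash side this can be handled with the same pattern as in section 1, routing on log_source. A minimal sketch (the index names nginx-access-* and nginx-error-* are examples, not from the original config):

output {
  if [log_source] == 'access' {
    elasticsearch {
      hosts => ["192.168.0.111:9200"]
      index => "nginx-access-%{+YYYY.MM.dd}"
    }
  }
  if [log_source] == 'error' {
    elasticsearch {
      hosts => ["192.168.0.111:9200"]
      index => "nginx-error-%{+YYYY.MM.dd}"
    }
  }
}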
3. Using a different Redis key for each application's logs
Use the keys setting of output.redis. The official example:
output.redis:
  hosts: ["localhost"]
  key: "default_list"
  keys:
    - key: "error_list"    # send to error_list if `message` field contains "error"
      when.contains:
        message: "error"
    - key: "debug_list"    # send to debug_list if `message` field contains "DEBUG"
      when.contains:
        message: "DEBUG"
    - key: "%{[fields.list]}"
Explanation: the default key is default_list, and the values in keys are assigned dynamically. When the message field of a log reaching redis contains error, the key error_list is used; when it contains DEBUG, the key debug_list is used.
The solution, then, is to add a value that identifies the application to each application's log events and match on it in keys, so that the logs of different applications are written to different Redis keys, as sketched below.
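Applied to the two nginx inputs from section 2, a possible output.redis configuration matching on the log_source field might look like this (a sketch only; the key names nginx_access_log and nginx_error_log are examples, not from the original):

output.redis:
  hosts: ["192.168.0.111:6379"]
  password: nginxredis
  db: 0
  key: nginx_log                 # fallback key for events that match no condition
  keys:
    - key: "nginx_access_log"    # access-log events (log_source: access)
      when.contains:
        log_source: "access"
    - key: "nginx_error_log"     # error-log events (log_source: error)
      when.contains:
        log_source: "error"

Logstash would then need one redis input block per key (e.g. one with key => "nginx_access_log" and one with key => "nginx_error_log"), each routed to its own elasticsearch index in the output section.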