ELK Setup Guide
2022-06-12 12:02:00 【Drunken fish!】
It's been a while since my last post, so today I'm sharing a hands-on operations article. More in-depth pieces will follow.
Spring Boot + Logstash + Elasticsearch + Kibana
Versions

- Elasticsearch 7.4.2
- Logstash 7.4.2
- Spring Boot 2.1.10
Downloads

Pick the product and version on the past-releases page and download them:
https://www.elastic.co/cn/downloads/past-releases
Deploy
Start Elasticsearch

Edit the configuration file `config/elasticsearch.yml`:
```yaml
cluster.name: my-application
node.name: node-1
path.data: /cxt/software/maces/7.4.2/elasticsearch-7.4.2/data
path.logs: /cxt/software/maces/7.4.2/elasticsearch-7.4.2/logs
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["node-1"]
```

Start it:
```shell
bin/elasticsearch
```
Start Kibana

Everything runs on the local machine, so no configuration changes are needed; start it straight from the bin directory:
```shell
bin/kibana
```
Start Logstash

Create springboot-log.conf under the config folder. This file makes Logstash listen on port 9600 on this machine, so the Spring Boot application can send its logs straight to port 9600. The input block defines the log input; the output block ships the logs to Elasticsearch.
```
input {
  # Listen on port 9600
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 9600
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["192.168.123.166:9200"]
    index => "springboot-logstash-%{+YYYY.MM.dd}"
  }
  # Uncomment to also print events to the console:
  # stdout {
  #   codec => rubydebug
  # }
}
```

Start it:
```shell
bin/logstash -f config/springboot-log.conf
```
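Before wiring up the Spring Boot side, you can check that the TCP input works by pushing a single event at it by hand. This is a minimal sketch and not part of the original setup; the class name and the event fields are illustrative, and the host/port are taken from the config above. The json_lines codec simply expects one JSON object per line:

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class LogstashSmokeTest {

    // Build one event in the newline-delimited JSON shape the json_lines codec expects.
    static String buildEvent(String level, String message) {
        return "{\"logLevel\":\"" + level + "\",\"detail\":\"" + message + "\"}\n";
    }

    public static void main(String[] args) {
        String payload = buildEvent("INFO", "manual smoke test");
        System.out.print(payload);
        // Sending requires Logstash to be up on 192.168.123.166:9600; failures are only reported.
        try (Socket socket = new Socket("192.168.123.166", 9600)) {
            OutputStream out = socket.getOutputStream();
            out.write(payload.getBytes(StandardCharsets.UTF_8));
            out.flush();
        } catch (Exception e) {
            System.err.println("Logstash not reachable: " + e.getMessage());
        }
    }
}
```

If Logstash is running with the stdout block uncommented, the event should appear on its console a moment later.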
Start the Spring Boot application

pom.xml dependency:
```xml
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.0</version>
</dependency>
```

A TestController with a test method:
```java
@RestController
public class TestController {

    public static final Logger log = LoggerFactory.getLogger(TestController.class);

    @RequestMapping("/test")
    public String test() {
        log.info("this is a log from springboot");
        log.trace("this is a trace log");
        return "success";
    }
}
```

Add code to the startup class's main method to generate logs automatically:
```java
@SpringBootApplication
public class ElkApplication {

    public static final Logger log = LoggerFactory.getLogger(ElkApplication.class);

    Random random = new Random(10000);

    public static void main(String[] args) {
        SpringApplication.run(ElkApplication.class, args);
        new ElkApplication().initTask();
    }

    private void initTask() {
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                log.info("seed info msg :" + random.nextInt(999999));
            }
        }, 100, 100, TimeUnit.MILLISECONDS);
    }
}
```

Create logback-spring.xml under resources:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- Logstash service address -->
        <destination>192.168.123.166:9600</destination>
        <!-- Log output encoder -->
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "logLevel": "%level",
                        "serviceName": "${springAppName:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "detail": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>
```
Verification

Start everything in the order above, then open the es-head plugin to inspect the indices. You should see documents both from calls to the test interface and from the application startup messages.
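If you don't have es-head installed, Elasticsearch's `_cat/indices` endpoint gives the same overview. A sketch using only the JDK's HTTP client, with the host and port assumed from the config above (run it once Elasticsearch is up):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EsIndexCheck {

    // The _cat/indices API lists all indices in a human-readable table.
    static String catIndicesUrl(String host, int port) {
        return "http://" + host + ":" + port + "/_cat/indices?v";
    }

    public static void main(String[] args) {
        String url = catIndicesUrl("192.168.123.166", 9200);
        System.out.println("GET " + url);
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(3000);
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        } catch (Exception e) {
            System.err.println("Elasticsearch not reachable: " + e.getMessage());
        }
    }
}
```

A springboot-logstash-yyyy.MM.dd index should appear in the listing once the application has sent its first events.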

Kibana data presentation

Set the index pattern:

After entering the pattern, set the timestamp field to match.

To view the data, choose Discover:

Adding Filebeat

In the Logstash directory, create filebeat-logstash-log.conf:
```
input {
  beats {
    host => "192.168.123.166"
    port => 9600
  }
}
output {
  elasticsearch {
    hosts => ["192.168.123.166:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```

Start it:
```shell
bin/logstash -f filebeat-logstash-log.conf
```

Modify the Filebeat configuration file: find the sections below and change them. The key points are the log files to monitor and the address of the Logstash server to output to:
```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /cxt/codework/java/springboot-demo/logs/springboot-elk/2022-06-04/info.2022-06-04.0.log

setup.kibana:
  host: "192.168.123.166:5601"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.123.166:9600"]
```

Configure where the Spring Boot application writes its log files: create logback-spring-file.xml under resources:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false" scan="false">
    <!-- Log file path -->
    <property name="log.path" value="logs/springboot-elk"/>

    <!-- Console log output -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{MM-dd HH:mm:ss.SSS} %-5level [%logger{50}] - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Rolling log file, info output -->
    <appender name="fileRolling_info" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/%d{yyyy-MM-dd}/info.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>50MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder>
            <pattern>%date [%thread] %-5level [%logger{50}] %file:%line - %msg%n</pattern>
        </encoder>
        <!--<filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>DENY</onMatch>
            <onMismatch>NEUTRAL</onMismatch>
        </filter>-->
    </appender>

    <!-- Rolling log file, error output -->
    <appender name="fileRolling_error" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/%d{yyyy-MM-dd}/error.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>50MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder>
            <pattern>%date [%thread] %-5level [%logger{50}] %file:%line - %msg%n</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
    </appender>

    <!-- Level: FATAL 0 ERROR 3 WARN 4 INFO 6 DEBUG 7 -->
    <root level="info">
        <!--{dev.start}-->
        <appender-ref ref="console"/>
        <!--{dev.end}-->
        <!--{alpha.start}
        <appender-ref ref="fileRolling_info"/>
        {alpha.end}-->
        <!--{release.start}-->
        <appender-ref ref="fileRolling_info"/>
        <!--{release.end}-->
        <appender-ref ref="fileRolling_error"/>
    </root>

    <!-- Framework level setting -->
    <!-- <include resource="config/logger-core.xml"/> -->
    <!-- Project level setting -->
    <!-- <logger name="your.package" level="DEBUG"/> -->
    <logger name="org.springframework" level="INFO"></logger>
    <logger name="org.mybatis" level="INFO"></logger>
</configuration>
```

Point application.yml at logback-spring-file.xml:
```yaml
logging:
  # The default logback-spring.xml ships logs to ES through Logstash;
  # switch to logback-spring-file.xml to write logs to archive files and let Filebeat monitor them
  config: classpath:logback-spring-file.xml
```

With that, the pipeline is: Spring Boot writes log files, Filebeat ships them to Logstash, and Logstash forwards them to Elasticsearch.
Alright, that's the whole ELK setup process, including monitoring log files with Filebeat. It's been a while since I put this much work into a post. More in-depth theory articles will follow; feel free to follow the WeChat official account 《Drunken fish JAVA》 and learn together.