[Flink] CDH/CDP Flink on YARN log configuration
2022-07-06 11:31:00 【kiraraLou】
Preface
Because Flink applications are mostly long-running jobs, the jobmanager.log and taskmanager.log files can easily grow to several GB, which can cause problems when viewing content in the Flink Dashboard. This article walks through enabling rolling logging for jobmanager.log and taskmanager.log.
The configuration here is done in a CDH/CDP environment, and it applies both to standalone Flink clusters and to Flink on YARN.
Configuring log4j
By default Flink logs with Log4j. The relevant configuration files are:
- log4j-cli.properties: used by the Flink command-line client (e.g. flink run)
- log4j-yarn-session.properties: used when starting a YARN session from the command line (yarn-session.sh)
- log4j.properties: JobManager/TaskManager logs (both standalone and YARN)
Why this happens
By default, the CSA Flink log4j.properties does not configure a rolling file appender, so each log file grows without bound.
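For contrast, a non-rolling setup looks roughly like the following. This is a sketch of what a configuration without rollover might contain, not the exact defaults shipped with any particular CSA version:

```properties
# A plain File appender: the log grows without bound until the process restarts.
appender.main.name = MainAppender
appender.main.type = File
appender.main.append = false
appender.main.fileName = ${sys:log.file}
appender.main.layout.type = PatternLayout
appender.main.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```

Swapping this File appender for a RollingFile appender is what enables size-based rollover.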
How to configure
1. Modify the flink-conf/log4j.properties parameters
Go to Cloudera Manager -> Flink -> Configuration -> Flink Client Advanced Configuration Snippet (Safety Valve) for flink-conf/log4j.properties.
2. Insert the following:
monitorInterval=30
# This affects logging for both user code and Flink
rootLogger.level = INFO
rootLogger.appenderRef.file.ref = MainAppender
# Uncomment this if you want to _only_ change Flink's logging
#logger.flink.name = org.apache.flink
#logger.flink.level = INFO
# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
logger.akka.name = akka
logger.akka.level = INFO
logger.kafka.name = org.apache.kafka
logger.kafka.level = INFO
logger.hadoop.name = org.apache.hadoop
logger.hadoop.level = INFO
logger.zookeeper.name = org.apache.zookeeper
logger.zookeeper.level = INFO
logger.shaded_zookeeper.name = org.apache.flink.shaded.zookeeper3
logger.shaded_zookeeper.level = INFO
# Log all infos in the given file
appender.main.name = MainAppender
appender.main.type = RollingFile
appender.main.append = true
appender.main.fileName = ${sys:log.file}
appender.main.filePattern = ${sys:log.file}.%i
appender.main.layout.type = PatternLayout
appender.main.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
appender.main.policies.type = Policies
appender.main.policies.size.type = SizeBasedTriggeringPolicy
appender.main.policies.size.size = 100MB
appender.main.policies.startup.type = OnStartupTriggeringPolicy
appender.main.strategy.type = DefaultRolloverStrategy
appender.main.strategy.max = ${env:MAX_LOG_FILE_NUMBER:-10}
# Suppress the irrelevant (wrong) warnings from the Netty channel handler
logger.netty.name = org.jboss.netty.channel.DefaultChannelPipeline
logger.netty.level = OFF
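A note on the strategy's max setting: `${env:MAX_LOG_FILE_NUMBER:-10}` is a Log4j2 environment-variable lookup with a default, i.e. use the `MAX_LOG_FILE_NUMBER` environment variable if it is set, and fall back to 10 retained files otherwise. The shell's own `${VAR:-default}` expansion behaves the same way, which makes the semantics easy to sanity-check:

```shell
# Unset: the fallback value applies, so at most 10 rolled files are kept.
unset MAX_LOG_FILE_NUMBER
echo "${MAX_LOG_FILE_NUMBER:-10}"   # prints 10

# Exported before the JVM starts, the variable overrides the default.
export MAX_LOG_FILE_NUMBER=20
echo "${MAX_LOG_FILE_NUMBER:-10}"   # prints 20
```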
3. Deploy the client configuration
In CM -> Flink -> Actions, run Deploy Client Configuration to save and redeploy the Flink client configuration.
Notes
Note 1: With the settings above, jobmanager.log and taskmanager.log roll over once they reach 100 MB (and again at each process startup), and the DefaultRolloverStrategy keeps at most MAX_LOG_FILE_NUMBER rolled files (10 unless that environment variable overrides it), so each log is capped at roughly 1 GB plus the active file.
Note 2: The log4j.properties above does not control jobmanager.err/out and taskmanager.err/out. If your application prints results directly to stdout/stderr, these files can fill the filesystem after running for a long time. We recommend using the log4j logging framework for any messages or results instead of printing them.
Note 3: Although the changes in this article were made in a CDP environment, I ran into a bug: CDP ships its own default configuration, which conflicts with the snippet even after it is applied. The final workaround was to edit the /etc/flink/conf/log4j.properties file directly.
IDEA
As an aside, here is the logging configuration for running Flink locally in IDEA.
pom.xml
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.25</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-slf4j-impl</artifactId>
<version>2.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<version>2.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.9.1</version>
</dependency>
resources
Edit a log4j2.xml file under the resources directory:
<?xml version="1.0" encoding="UTF-8"?>
<configuration monitorInterval="5">
<Properties>
<property name="LOG_PATTERN" value="%date{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n" />
<property name="LOG_LEVEL" value="INFO" />
</Properties>
<appenders>
<console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="${LOG_PATTERN}"/>
<ThresholdFilter level="${LOG_LEVEL}" onMatch="ACCEPT" onMismatch="DENY"/>
</console>
</appenders>
<loggers>
<root level="${LOG_LEVEL}">
<appender-ref ref="Console"/>
</root>
</loggers>
</configuration>
Demo
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class Main {
    // create the Logger
private static final Logger log = LoggerFactory.getLogger(Main.class);
public static void main(String[] args) throws Exception {
        // print a log message
log.info("-----------------> start");
}
}
References
- https://github.com/apache/flink/blob/master/flink-dist/src/main/flink-bin/conf/log4j.properties
- https://my.cloudera.com/knowledge/How-to-configure-CSA-flink-to-rotate-and-archive-the?id=333860
- https://nightlies.apache.org/flink/flink-docs-master/zh/docs/deployment/advanced/logging/
- https://cs101.blog/2018/01/03/logging-configuration-in-flink/