[Flink] CDH/CDP Flink on YARN log configuration
2022-07-06 11:31:00 【kiraraLou】
Preface
Because Flink applications are mostly long-running jobs, the jobmanager.log and taskmanager.log files can easily grow to several GB, which can cause problems when you try to view their content in the Flink Dashboard. This article walks through how to enable rolling logging for jobmanager.log and taskmanager.log.
The configuration described here is done in a CDH/CDP environment, and it applies to both standalone Flink clusters and Flink on YARN.
Configuring log4j
Flink uses Log4j for logging by default. The configuration files are:
- log4j-cli.properties: used by the Flink command-line client (for example flink run)
- log4j-yarn-session.properties: used when starting a YARN session from the command line (yarn-session.sh)
- log4j.properties: JobManager / TaskManager logs (both standalone and YARN)
Cause of the problem
By default, the CSA Flink log4j.properties does not configure a rolling file appender.
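For illustration only, and not necessarily the exact default that CSA ships: a plain, non-rolling file appender in Log4j 2 properties syntax looks roughly like the snippet below. It writes everything to a single file that grows without bound, which is exactly the behavior we want to replace.
# Hypothetical non-rolling appender, shown only to illustrate the problem
appender.main.name = MainAppender
appender.main.type = File
appender.main.append = true
appender.main.fileName = ${sys:log.file}
appender.main.layout.type = PatternLayout
appender.main.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
rootLogger.appenderRef.file.ref = MainAppender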
How to configure
1. Modify the flink-conf/log4j.properties parameters
Go to Cloudera Manager -> Flink -> Configuration -> Flink Client Advanced Configuration Snippet (Safety Valve) for flink-conf/log4j.properties.
2. Insert the following configuration:
monitorInterval=30
# This affects logging for both user code and Flink
rootLogger.level = INFO
rootLogger.appenderRef.file.ref = MainAppender
# Uncomment this if you want to _only_ change Flink's logging
#logger.flink.name = org.apache.flink
#logger.flink.level = INFO
# The following lines keep the log level of common libraries/connectors on
# log level INFO. The root logger does not override this. You have to manually
# change the log levels here.
logger.akka.name = akka
logger.akka.level = INFO
logger.kafka.name= org.apache.kafka
logger.kafka.level = INFO
logger.hadoop.name = org.apache.hadoop
logger.hadoop.level = INFO
logger.zookeeper.name = org.apache.zookeeper
logger.zookeeper.level = INFO
logger.shaded_zookeeper.name = org.apache.flink.shaded.zookeeper3
logger.shaded_zookeeper.level = INFO
# Log all infos in the given file
appender.main.name = MainAppender
appender.main.type = RollingFile
appender.main.append = true
appender.main.fileName = ${sys:log.file}
appender.main.filePattern = ${sys:log.file}.%i
appender.main.layout.type = PatternLayout
appender.main.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
appender.main.policies.type = Policies
appender.main.policies.size.type = SizeBasedTriggeringPolicy
appender.main.policies.size.size = 100MB
appender.main.policies.startup.type = OnStartupTriggeringPolicy
appender.main.strategy.type = DefaultRolloverStrategy
appender.main.strategy.max = ${env:MAX_LOG_FILE_NUMBER:-10}
# Suppress the irrelevant (wrong) warnings from the Netty channel handler
logger.netty.name = org.jboss.netty.channel.DefaultChannelPipeline
logger.netty.level = OFF
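With this appender, the active file keeps the name that YARN passes in through ${sys:log.file}, and rolled files get a numeric suffix from the %i in filePattern. In a NodeManager container log directory you would therefore expect to see files along these lines (the file names below are illustrative):
taskmanager.log
taskmanager.log.1
taskmanager.log.2
...
taskmanager.log.10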
3. Deploy the client configuration
In Cloudera Manager, go to Flink -> Actions -> Deploy Client Configuration to save and redeploy the Flink client configuration.
Notes
Note 1: With the settings above, jobmanager.log and taskmanager.log roll over every 100 MB, and at most 10 rolled files per log are kept (the MAX_LOG_FILE_NUMBER default); once that limit is reached, the oldest rolled file is deleted.
Note 2: The log4j.properties above does not control jobmanager.err/out or taskmanager.err/out. If your application explicitly prints results to stdout/stderr, those files can fill the file system after running for a long time. We recommend recording any messages or results through the log4j logging framework instead of printing them; see the sketch after these notes.
Note 3: Although the changes in this article were made in a CDP environment, I ran into a problem: CDP ships its own default configuration, so even after making the modification above there are conflicts. In the end, the fix was to edit the /etc/flink/conf/log4j.properties file directly.
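As mentioned in Note 2, here is a minimal sketch of what "log it instead of printing it" means in application code; the class and variable names are placeholders, not part of the original article:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WordCountJob {
    private static final Logger LOG = LoggerFactory.getLogger(WordCountJob.class);

    public static void main(String[] args) {
        String result = "done"; // placeholder for whatever you would have printed
        // System.out.println(result); // would end up in the .out file, which is never rotated
        LOG.info("job result: {}", result); // goes through log4j and is rotated by the config above
    }
}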
IDEA
As a side note, here is the logging configuration for running Flink locally in IDEA.
pom.xml
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.25</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-slf4j-impl</artifactId>
<version>2.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<version>2.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.9.1</version>
</dependency>
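The versions above are simply the ones used in the original article. In a real project you would typically align the slf4j/log4j versions with the Flink version you build against and centralize them as Maven properties, for example (illustrative):
<properties>
    <slf4j.version>1.7.25</slf4j.version>
    <log4j.version>2.9.1</log4j.version>
</properties>
and then reference ${slf4j.version} and ${log4j.version} in the <version> tags above.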
resources
Edit a log4j2.xml file under the resources directory:
<?xml version="1.0" encoding="UTF-8"?>
<configuration monitorInterval="5">
<Properties>
<property name="LOG_PATTERN" value="%date{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n" />
<property name="LOG_LEVEL" value="INFO" />
</Properties>
<appenders>
<console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="${LOG_PATTERN}"/>
<ThresholdFilter level="${LOG_LEVEL}" onMatch="ACCEPT" onMismatch="DENY"/>
</console>
</appenders>
<loggers>
<root level="${LOG_LEVEL}">
<appender-ref ref="Console"/>
</root>
</loggers>
</configuration>
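If you also want the local run to roll its log file the same way the cluster does, a minimal sketch of an extra RollingFile appender could be added inside the <appenders> element of this log4j2.xml; the file path and appender name below are just examples:
<RollingFile name="File" fileName="logs/flink-local.log" filePattern="logs/flink-local.log.%i">
    <PatternLayout pattern="${LOG_PATTERN}"/>
    <Policies>
        <SizeBasedTriggeringPolicy size="100 MB"/>
    </Policies>
    <DefaultRolloverStrategy max="10"/>
</RollingFile>
Remember to also add <appender-ref ref="File"/> next to the Console reference under <root>.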
Demo
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {
    // Create the Logger instance
    private static final Logger log = LoggerFactory.getLogger(Main.class);

    public static void main(String[] args) throws Exception {
        // Print a log message
        log.info("-----------------> start");
    }
}
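With the pattern defined in the log4j2.xml above, running this class prints a line along these lines (the timestamp will of course differ):
11:31:00.123 [main] INFO  Main - -----------------> start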
References
- https://github.com/apache/flink/blob/master/flink-dist/src/main/flink-bin/conf/log4j.properties
- https://my.cloudera.com/knowledge/How-to-configure-CSA-flink-to-rotate-and-archive-the?id=333860
- https://nightlies.apache.org/flink/flink-docs-master/zh/docs/deployment/advanced/logging/
- https://cs101.blog/2018/01/03/logging-configuration-in-flink/