Background

As of this writing (around February 2022), the latest 1.3.0 release of the logback logging framework is 1.3.0-alpha14, which is not a stable release. Compared with the latest stable version 1.2.10, the slf4j-api dependency has been upgraded, but the API it exposes is essentially unchanged, and the XML configuration gains an import tag that can simplify the configuration of many appenders. Out of a mild compulsion to run the latest version, this article analyzes the commonly used logback configuration options and some practical experience, based on 1.3.0-alpha14.

Log levels

See the Level class:

No. | Level | Value             | Notes
----|-------|-------------------|----------------------
1   | OFF   | Integer.MAX_VALUE | turns off log output
2   | TRACE | 5000              | -
3   | DEBUG | 10000             | -
4   | INFO  | 20000             | -
5   | WARN  | 30000             | -
6   | ERROR | 40000             | -
7   | ALL   | Integer.MIN_VALUE | prints all logs

The larger the value, the higher the log level; arranged from low to high (left to right):

TRACE < DEBUG < INFO < WARN < ERROR

Log levels are generally used as filter or query conditions for log events; in some components, configuration options let you discard log events below a given level or ignore events of specified levels.
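
As a small sketch of level-based filtering, the built-in LevelFilter (ch.qos.logback.classic.filter.LevelFilter) matches exactly one level and lets you decide what to do with matches and mismatches; the appender name and pattern below are illustrative:

```xml
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <!-- accept only INFO events, deny everything else -->
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
        <level>INFO</level>
        <onMatch>ACCEPT</onMatch>
        <onMismatch>DENY</onMismatch>
    </filter>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>[%level] %logger - %msg%n</pattern>
    </encoder>
</appender>
```

ThresholdFilter, by contrast, filters by a lower bound: events below the configured level are denied.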

Introducing the dependencies

Because the current 1.3.0-alpha14 version is very "new", most mainstream frameworks have not yet integrated it. If you want to try it out, it is best to pin the corresponding dependency versions globally through a BOM:

<!-- BOM -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>2.0.0-alpha6</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.3.0-alpha14</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-core</artifactId>
            <version>1.3.0-alpha14</version>
        </dependency>
    </dependencies>
</dependencyManagement>

<!-- Dependencies -->
<dependencies>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-core</artifactId>
    </dependency>
    <dependency>
        <groupId>ch.qos.logback</groupId>
        <artifactId>logback-classic</artifactId>
    </dependency>
</dependencies>

Basic logback.xml configuration example

The APIs provided by 1.2.x and 1.3.x are essentially unchanged, 1.3.x remains compatible with old configuration files, and an import tag is provided to simplify class specification:

  • The old 1.2.x configuration style:
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%X{TRACE_ID}] - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
  • The new 1.3.x configuration style:
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <import class="ch.qos.logback.core.ConsoleAppender"/>
    <import class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"/>

    <appender name="STDOUT" class="ConsoleAppender">
        <encoder class="PatternLayoutEncoder">
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%X{TRACE_ID}] - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>

Looking at a configuration with a single Appender, the import tag does not seem to simplify much, but with many Appenders it noticeably reduces the repeated class specification, for example:

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <property name="app" value="api-gateway"/>
    <property name="filename" value="server"/>

    <import class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"/>
    <import class="ch.qos.logback.core.rolling.RollingFileAppender"/>
    <import class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"/>
    <import class="ch.qos.logback.core.ConsoleAppender"/>
    <import class="ch.qos.logback.classic.AsyncAppender"/>
    <import class="ch.qos.logback.classic.filter.ThresholdFilter"/>
    <import class="cn.vlts.logback.IncludeLevelSetFilter"/>

    <appender name="INFO" class="RollingFileAppender">
        <file>/data/log-center/${app}/${filename}.log</file>
        <rollingPolicy class="TimeBasedRollingPolicy">
            <fileNamePattern>/data/log-center/${app}/${filename}.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>14</maxHistory>
        </rollingPolicy>
        <encoder class="PatternLayoutEncoder">
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%X{TRACE_ID}] ${app} - %msg%n</pattern>
        </encoder>
        <filter class="IncludeLevelSetFilter">
            <levels>INFO,WARN</levels>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <appender name="ERROR" class="RollingFileAppender">
        <file>/data/log-center/${app}/${filename}-error.log</file>
        <rollingPolicy class="TimeBasedRollingPolicy">
            <fileNamePattern>/data/log-center/${app}/${filename}-error.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>14</maxHistory>
        </rollingPolicy>
        <encoder class="PatternLayoutEncoder">
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%X{TRACE_ID}] ${app} - %msg%n</pattern>
        </encoder>
        <filter class="ThresholdFilter">
            <level>ERROR</level>
        </filter>
    </appender>

    <appender name="STDOUT" class="ConsoleAppender">
        <encoder class="PatternLayoutEncoder">
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%X{TRACE_ID}] - %msg%n</pattern>
        </encoder>
        <filter class="ThresholdFilter">
            <level>DEBUG</level>
        </filter>
    </appender>

    <appender name="ASYNC_INFO" class="AsyncAppender">
        <queueSize>1024</queueSize>
        <discardingThreshold>0</discardingThreshold>
        <appender-ref ref="INFO"/>
    </appender>

    <appender name="ASYNC_ERROR" class="AsyncAppender">
        <queueSize>256</queueSize>
        <discardingThreshold>0</discardingThreshold>
        <appender-ref ref="ERROR"/>
    </appender>

    <logger name="sun.rmi" level="error"/>
    <logger name="sun.net" level="error"/>
    <logger name="javax.management" level="error"/>
    <logger name="org.redisson" level="warn"/>
    <logger name="com.zaxxer" level="warn"/>

    <root level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="ASYNC_INFO"/>
        <appender-ref ref="ASYNC_ERROR"/>
    </root>
</configuration>

The above is an example logback.xml configuration for an API gateway. It uses a custom Filter implementation, IncludeLevelSetFilter:

// cn.vlts.logback.IncludeLevelSetFilter
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.AbstractMatcherFilter;
import ch.qos.logback.core.spi.FilterReply;

import java.util.Arrays;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Collectors;

public class IncludeLevelSetFilter extends AbstractMatcherFilter<ILoggingEvent> {

    private String levels;
    private Set<Level> levelSet;

    @Override
    public FilterReply decide(ILoggingEvent event) {
        return levelSet.contains(event.getLevel()) ? onMatch : onMismatch;
    }

    public void setLevels(String levels) {
        this.levels = levels;
        this.levelSet = Arrays.stream(levels.split(","))
                .map(item -> Level.toLevel(item, Level.INFO))
                .collect(Collectors.toSet());
    }

    @Override
    public void start() {
        if (Objects.nonNull(this.levels)) {
            super.start();
        }
    }
}

IncludeLevelSetFilter accepts log events whose level falls within a specified set. When you need more refined filtering conditions than the built-in filters (LevelFilter, ThresholdFilter, etc.) can provide, you can extend ch.qos.logback.core.filter.Filter yourself to customize the filtering strategy. The configuration above defines five appenders, of which two are asynchronous wrappers; the three core appenders are:

  • STDOUT (ConsoleAppender): synchronous log printing to standard output, at level DEBUG or above
  • ASYNC_INFO (wrapping INFO): RollingFileAppender, asynchronous rolling-file logging at level INFO or WARN, appending to /data/log-center/api-gateway/server.log; archives follow the pattern /data/log-center/api-gateway/server.${yyyy-MM-dd}.log (plus a compression suffix if the fileNamePattern specifies one), with at most 14 archive files kept
  • ASYNC_ERROR (wrapping ERROR): RollingFileAppender, asynchronous rolling-file logging at level ERROR, appending to /data/log-center/api-gateway/server-error.log; archives follow the pattern /data/log-center/api-gateway/server-error.${yyyy-MM-dd}.log (plus a compression suffix if the fileNamePattern specifies one), with at most 14 archive files kept

Commonly used Appenders and their parameters

The commonly used Appenders are:

  • ConsoleAppender
  • FileAppender
  • RollingFileAppender
  • AsyncAppender

Among them, RollingFileAppender is an extension (subclass) of FileAppender; in practice, ConsoleAppender and RollingFileAppender have the widest applicability. From the class-hierarchy perspective, both ConsoleAppender and FileAppender support defining an Encoder; the most commonly used Encoder implementation is PatternLayoutEncoder, used to customize the final output format of log events. Because the Encoder pattern format is extremely flexible and has many parameters, space does not permit covering it in this article.

ConsoleAppender

ConsoleAppender appends logs to the console, which for a Java application means System.out or System.err. The parameters supported by ConsoleAppender are:

Parameter  | Type                                | Default              | Description
-----------|-------------------------------------|----------------------|------------
encoder    | ch.qos.logback.core.encoder.Encoder | PatternLayoutEncoder | defines the Encoder
target     | String                              | System.out           | output target; either System.out or System.err
withJansi  | boolean                             | false                | whether to enable Jansi, a library supporting colored ANSI escape codes, used for colored console output

An example of using ConsoleAppender:

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <import class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"/>
    <import class="ch.qos.logback.core.ConsoleAppender"/>

    <appender name="STDOUT" class="ConsoleAppender">
        <encoder class="PatternLayoutEncoder">
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%X{TRACE_ID}] - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>

RollingFileAppender

RollingFileAppender is a subclass of FileAppender. It supports outputting logs to files, and through a rolling policy (RollingPolicy) it can split and archive log files according to built-in or custom rules. The parameters supported by RollingFileAppender are:

Parameter        | Type                                          | Default | Description
-----------------|-----------------------------------------------|---------|------------
file             | String                                        | -       | the target file of the current log output
append           | boolean                                       | true    | whether output to the target file is appended
rollingPolicy    | ch.qos.logback.core.rolling.RollingPolicy     | -       | the log file rolling policy
triggeringPolicy | ch.qos.logback.core.rolling.TriggeringPolicy  | -       | the policy deciding when log file rolling is triggered
prudent          | boolean                                       | false   | whether prudent mode is enabled (when on, log files are written under FileLock protection); FileAppender also supports this mode

The commonly used built-in RollingPolicy implementations are:

  • TimeBasedRollingPolicy: the most commonly used rolling policy, rolling and archiving based on date and time

    Parameter           | Type     | Default | Description
    --------------------|----------|---------|------------
    fileNamePattern     | String   | -       | archive file name pattern, e.g. /var/log/app/server.%d{yyyy-MM-dd, UTC}.log.gz
    maxHistory          | int      | -       | maximum number of archived files
    totalSizeCap        | FileSize | -       | upper limit on the total size of all archived files
    cleanHistoryOnStart | boolean  | false   | when true, the Appender removes out-of-date archived log files at startup

  • SizeAndTimeBasedRollingPolicy: rolling and archiving based on log file size or date and time

    Parameter           | Type     | Default | Description
    --------------------|----------|---------|------------
    fileNamePattern     | String   | -       | archive file name pattern, e.g. /var/log/app/server.%d{yyyy-MM-dd, UTC}.%i.log.gz
    maxHistory          | int      | -       | maximum number of archived files
    totalSizeCap        | FileSize | -       | upper limit on the total size of all archived files
    cleanHistoryOnStart | boolean  | false   | when true, the Appender removes out-of-date archived log files at startup

  • FixedWindowRollingPolicy: rolling and archiving based on a fixed window of archive indices

    Parameter       | Type   | Default | Description
    ----------------|--------|---------|------------
    fileNamePattern | String | -       | archive file name pattern, which must contain %i, e.g. /var/log/app/server.%i.log.gz
    minIndex        | int    | -       | lower bound of the window index
    maxIndex        | int    | -       | upper bound of the window index
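
As a sketch (paths and size caps here are illustrative, and maxFileSize is an additional SizeAndTimeBasedRollingPolicy parameter not listed in the table above), a size-and-time based configuration might look like:

```xml
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/var/log/app/server.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <!-- %d rolls by day, %i rolls by size within the same day -->
        <fileNamePattern>/var/log/app/server.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
        <maxFileSize>100MB</maxFileSize>
        <maxHistory>14</maxHistory>
        <totalSizeCap>10GB</totalSizeCap>
    </rollingPolicy>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>[%date{ISO8601}] [%level] %logger - %msg%n</pattern>
    </encoder>
</appender>
```

The .gz suffix in fileNamePattern also enables GZIP compression of the archives.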

The commonly used built-in TriggeringPolicy implementations are:

  • SizeBasedTriggeringPolicy: triggers rolling based on file size
  • DefaultTimeBasedFileNamingAndTriggeringPolicy (used internally by logback): triggers rolling based on date-time and the file name, by checking the system date and time

Here are some points worth noting:

  • TimeBasedRollingPolicy also implements the TriggeringPolicy interface (delegating to DefaultTimeBasedFileNamingAndTriggeringPolicy), providing a complete rolling-trigger strategy, so when using TimeBasedRollingPolicy there is no need to specify a separate triggeringPolicy instance
  • SizeAndTimeBasedRollingPolicy is implemented with the SizeAndTimeBasedFNATP sub-component; older versions generally used SizeAndTimeBasedFNATP directly for size-or-time based rolling and archiving, and in the new version it is recommended to replace that component with SizeAndTimeBasedRollingPolicy
  • logback selects the compression algorithm for archived log files based on the file name suffix defined in the fileNamePattern parameter; for example .zip selects the ZIP algorithm and .gz selects GZIP
  • The fileNamePattern parameters of SizeAndTimeBasedRollingPolicy and FixedWindowRollingPolicy both support the %i placeholder, used as the archive index; the index actually starts from 0
  • Combining FixedWindowRollingPolicy with SizeBasedTriggeringPolicy implements log rolling based on file size (the counterpart of what TimeBasedRollingPolicy provides for time)
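
The last point can be sketched as follows (file path, window bounds, and size threshold are illustrative):

```xml
<appender name="SIZE_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/var/log/app/server.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
        <!-- %i is the window index; server.1.log.gz is always the most recent archive -->
        <fileNamePattern>/var/log/app/server.%i.log.gz</fileNamePattern>
        <minIndex>1</minIndex>
        <maxIndex>7</maxIndex>
    </rollingPolicy>
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
        <maxFileSize>100MB</maxFileSize>
    </triggeringPolicy>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>[%date{ISO8601}] [%level] %logger - %msg%n</pattern>
    </encoder>
</appender>
```

Each time server.log exceeds the threshold, the archives are shifted up by one index and the oldest (beyond maxIndex) is deleted.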

AsyncAppender

AsyncAppender enables asynchronous logging and must be used together with other Appender types; intuitively, it grafts "asynchronous" behavior onto other Appender instances. The parameters supported by AsyncAppender are:

Parameter           | Type    | Default       | Description
--------------------|---------|---------------|------------
queueSize           | int     | 256           | maximum capacity of the blocking queue that buffers log events
discardingThreshold | int     | queueSize / 5 | discard threshold: when the remaining queue capacity drops below it, log events of all levels except WARN and ERROR are discarded; setting it to 0 means no events are ever discarded
includeCallerData   | boolean | false         | whether log events include caller data; setting it to true adds calling-thread information, MDC data, etc.
maxFlushTime        | int     | 1000          | maximum time, in milliseconds, to wait for the asynchronous worker thread to flush and exit
neverBlock          | boolean | false         | whether to never block the calling application thread; when true, newly arriving log events are simply discarded while the queue is full

Existing Appender instances are attached to a new AsyncAppender instance through <appender-ref> tags, and a single AsyncAppender instance can attach multiple Appenders via multiple <appender-ref> tags, for example:

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <import class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"/>
    <import class="ch.qos.logback.core.ConsoleAppender"/>
    <import class="ch.qos.logback.classic.AsyncAppender"/>

    <appender name="STDOUT" class="ConsoleAppender">
        <encoder class="PatternLayoutEncoder">
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%X{TRACE_ID}] - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="ASYNC_STDOUT" class="AsyncAppender">
        <queueSize>1024</queueSize>
        <discardingThreshold>0</discardingThreshold>
        <appender-ref ref="STDOUT"/>
        <!-- <appender-ref ref="OTHER_APPENDER"/> -->
    </appender>

    <root level="DEBUG" additivity="false">
        <appender-ref ref="ASYNC_STDOUT"/>
    </root>
</configuration>

Specifying the configuration file for initialization

logback's built-in initialization strategies (in priority order) are as follows:

  • initialize from the logback-test.xml file on the ClassPath
  • initialize from the logback.xml file on the ClassPath
  • initialize through the SPI mechanism, via META-INF/services/ch.qos.logback.classic.spi.Configurator on the ClassPath
  • if none of the previous three apply, initialize through BasicConfigurator, which provides only the most basic log handling

You can also use the system property logback.configurationFile on the command line to directly specify an external logback configuration file (the suffix must be .xml or .groovy); this initialization method bypasses the built-in strategies, for example:

java -Dlogback.configurationFile=/path/conf/config.xml -jar app.jar

Or set the system property programmatically (the demo below comes from the official examples):

import ch.qos.logback.classic.util.ContextInitializer;

public class ServerMain {
    public static void main(String[] args) throws IOException, InterruptedException {
        // must be set before the first call to LoggerFactory.getLogger();
        // ContextInitializer.CONFIG_FILE_PROPERTY is "logback.configurationFile"
        System.setProperty(ContextInitializer.CONFIG_FILE_PROPERTY, "/path/to/config.xml");
        ...
    }
}

This approach requires that no static member variable call LoggerFactory.getLogger() before the property is set, because that may trigger initialization through the built-in strategies too early.

Programmatic initialization

To take complete control of logback initialization, you can configure it purely programmatically (the code below writes out, in code, a configuration in the spirit of the "Best practices" section):

import ch.qos.logback.classic.AsyncAppender;
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.filter.ThresholdFilter;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.ConsoleAppender;
import ch.qos.logback.core.rolling.RollingFileAppender;
import ch.qos.logback.core.rolling.TimeBasedRollingPolicy;
import org.slf4j.LoggerFactory;

/**
 * @author throwable
 * @version v1
 * @description
 * @since 2022/2/13 13:09
 */
public class LogbackLauncher {

    public static void main(String[] args) throws Exception {
        LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
        loggerContext.reset();
        Logger rootLogger = loggerContext.getLogger(Logger.ROOT_LOGGER_NAME);
        // remove all existing Appenders
        rootLogger.detachAndStopAllAppenders();
        // RollingFileAppender
        PatternLayoutEncoder fileEncoder = new PatternLayoutEncoder();
        fileEncoder.setContext(loggerContext);
        fileEncoder.setPattern("[%date{ISO8601}] [%level] %logger{80} [%thread] [%X{TRACE_ID}] ${app} - %msg%n");
        fileEncoder.start();
        RollingFileAppender<ILoggingEvent> fileAppender = new RollingFileAppender<>();
        fileAppender.setContext(loggerContext);
        fileAppender.setName("FILE");
        fileAppender.setFile("/data/log-center/api-gateway/server.log");
        fileAppender.setAppend(true);
        fileAppender.setEncoder(fileEncoder);
        ThresholdFilter fileFilter = new ThresholdFilter();
        fileFilter.setLevel("INFO");
        // a Filter must be started, otherwise decide() returns NEUTRAL
        fileFilter.start();
        fileAppender.addFilter(fileFilter);
        TimeBasedRollingPolicy<ILoggingEvent> rollingPolicy = new TimeBasedRollingPolicy<>();
        rollingPolicy.setParent(fileAppender);
        rollingPolicy.setContext(loggerContext);
        rollingPolicy.setFileNamePattern("/data/log-center/api-gateway/server.%d{yyyy-MM-dd}.log.gz");
        rollingPolicy.setMaxHistory(14);
        rollingPolicy.start();
        fileAppender.setRollingPolicy(rollingPolicy);
        fileAppender.start();
        AsyncAppender asyncAppender = new AsyncAppender();
        asyncAppender.setName("ASYNC_FILE");
        asyncAppender.setContext(loggerContext);
        asyncAppender.setDiscardingThreshold(0);
        asyncAppender.setQueueSize(1024);
        asyncAppender.addAppender(fileAppender);
        asyncAppender.start();
        // ConsoleAppender
        PatternLayoutEncoder consoleEncoder = new PatternLayoutEncoder();
        consoleEncoder.setContext(loggerContext);
        consoleEncoder.setPattern("[%date{ISO8601}] [%level] %logger{80} [%thread] [%X{TRACE_ID}] - %msg%n");
        consoleEncoder.start();
        ConsoleAppender<ILoggingEvent> consoleAppender = new ConsoleAppender<>();
        consoleAppender.setContext(loggerContext);
        consoleAppender.setEncoder(consoleEncoder);
        ThresholdFilter consoleFilter = new ThresholdFilter();
        consoleFilter.setLevel("DEBUG");
        consoleFilter.start();
        consoleAppender.addFilter(consoleFilter);
        consoleAppender.start();
        rootLogger.setLevel(Level.DEBUG);
        rootLogger.setAdditive(false);
        rootLogger.addAppender(consoleAppender);
        rootLogger.addAppender(asyncAppender);

        org.slf4j.Logger logger = LoggerFactory.getLogger(LogbackLauncher.class);
        logger.debug("debug nano => {}", System.nanoTime());
        logger.info("info nano => {}", System.nanoTime());
        logger.warn("warn nano => {}", System.nanoTime());
        logger.error("error nano => {}", System.nanoTime());
    }
}

Best practices

It is recommended to use the combination mentioned most frequently in the logback documentation: RollingFileAppender + TimeBasedRollingPolicy, plus a ConsoleAppender to make local development and debugging convenient. Generally, log files end up being shipped to an ELK stack by a log collector such as Filebeat. Provided the log output format is defined sensibly (for example, the level appears in the pattern), you can write all logs of level INFO and above to a single file without splitting by level; in Kibana you can still query by level easily with level: ${LEVEL}.

For services with high performance requirements, such as an API gateway, it is advisable to attach the RollingFileAppender to an AsyncAppender instance; if memory allows, raise queueSize and set discardingThreshold = 0 (never discard log events when the queue is full; this may block the calling thread, and if that is unacceptable you can extend the asynchronous logging yourself). With sufficient server disk space there is generally no need to cap the size of archived log files; just set the maximum number of archived files, with 14 to 30 recommended (that is, between 2 weeks and 1 month). Here is a template:

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <property name="app" value="application name, e.g. api-gateway"/>
    <property name="filename" value="file name prefix, e.g. server"/>

    <import class="ch.qos.logback.classic.encoder.PatternLayoutEncoder"/>
    <import class="ch.qos.logback.core.rolling.RollingFileAppender"/>
    <import class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"/>
    <import class="ch.qos.logback.core.ConsoleAppender"/>
    <import class="ch.qos.logback.classic.AsyncAppender"/>
    <import class="ch.qos.logback.classic.filter.ThresholdFilter"/>

    <appender name="FILE" class="RollingFileAppender">
        <file>/data/log-center/${app}/${filename}.log</file>
        <rollingPolicy class="TimeBasedRollingPolicy">
            <fileNamePattern>/data/log-center/${app}/${filename}.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>14</maxHistory>
        </rollingPolicy>
        <encoder class="PatternLayoutEncoder">
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%X{TRACE_ID}] ${app} - %msg%n</pattern>
        </encoder>
        <filter class="ThresholdFilter">
            <level>INFO</level>
        </filter>
    </appender>

    <appender name="STDOUT" class="ConsoleAppender">
        <encoder class="PatternLayoutEncoder">
            <pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%X{TRACE_ID}] - %msg%n</pattern>
        </encoder>
        <filter class="ThresholdFilter">
            <level>DEBUG</level>
        </filter>
    </appender>

    <appender name="ASYNC_FILE" class="AsyncAppender">
        <queueSize>1024</queueSize>
        <!-- do not discard any log events when the queue is nearly full -->
        <discardingThreshold>0</discardingThreshold>
        <appender-ref ref="FILE"/>
    </appender>

    <!-- override log levels where needed to reduce unwanted output -->
    <logger name="sun.rmi" level="error"/>
    <logger name="sun.net" level="error"/>
    <logger name="javax.management" level="error"/>
    <logger name="org.redisson" level="warn"/>
    <logger name="com.zaxxer" level="warn"/>

    <root level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="ASYNC_FILE"/>
    </root>
</configuration>

Summary

This article has only covered some basic configuration and practical experience with the latest version of logback; it is also archived as a note that can be picked up and reused at any time in the future.

