
Starting from 1.5: Building a Microservice Framework - Call Chain Tracing with traceId

2022-07-06 00:36:00 InfoQ

Hello, I'm Wukong.

Preface

Recently I have been building a basic version of a project framework: a microservice framework based on Spring Cloud.

If the bare Spring Cloud framework counts as 1, and basic components such as Swagger and logback add another 0.5, then I am assembling on top of that 1.5 to complete a full microservice project framework.

Why reinvent the wheel? Aren't the ready-made project frameworks on the market good enough?

Because the project team is not allowed to use external ready-made frameworks such as Ruoyi. In addition, our project has its own requirements, and for technology selection we prefer frameworks we are familiar with, so building our own is a reasonable choice.

Core functions

The framework needs to include the following core functions:

  • Split the project into multiple microservice modules, with a demo microservice module to copy for expansion, completed
  • Extract a core framework module, completed
  • Registry center: Eureka, completed
  • Remote invocation: OpenFeign, completed
  • Logging: logback, including traceId tracing, completed
  • Swagger API documentation, completed
  • Shared configuration files, completed
  • Log retrieval: ELK Stack, completed
  • Custom Starter, to be determined
  • Integrated cache: Redis, with Redis Sentinel for high availability, completed
  • Integrated database: MySQL, with high availability, completed
  • Integrated MyBatis-Plus, completed
  • Link tracing component, to be determined
  • Monitoring, to be determined
  • Utility classes, to be developed
  • Gateway, technology selection to be determined
  • Audit logs written to ES, to be determined
  • Distributed file system, to be determined
  • Scheduled tasks, to be determined
  • and more

This article is about log link tracing.

1. Pain points

Pain point 1: Multiple logs within one process cannot be correlated

A single request may invoke a dozen methods on the back end and print a dozen log lines, and those log lines cannot be tied together.

As shown in the figure below: the client calls the order service; method A in the order service calls method B, and method B calls method C.

Method A prints the 1st and 5th log lines, method B prints the 2nd, and method C prints the 3rd and 4th. But these 5 log lines have no explicit connection; the only link is that they appear in chronological order, and if other concurrent requests arrive, their log lines interleave and break that continuity.

[Figure: five log lines produced by methods A, B, and C within the order service]

Pain point 2: How to correlate logs across services

Each microservice records its own logs for the same request. How can logs be correlated across processes?

As shown in the figure below: the order service and the coupon service are two microservices deployed on two machines, and method A in the order service remotely calls method D in the coupon service.

Method A writes its logs to log file 1, recording 5 lines; method D writes its logs to log file 2, also recording 5 lines. These 10 log lines cannot be associated with each other.

[Figure: order service and coupon service writing to separate log files on separate machines]

Pain point 3: How to correlate logs across threads

How are the logs of the main thread and a child thread related?

As shown in the figure below: method A on the main thread starts a child thread, and the child thread executes method E.

Method A prints the 1st log line, and the child thread running method E prints the 2nd and 3rd.

[Figure: main-thread method A spawning a child thread that runs method E]

Pain point 4: How do we trace requests when a third party calls our service?

The core problems solved in this article are the first and second ones. Multithreading has not been introduced yet and no third party calls us at the moment, so problems three and four will be addressed later.

2. Solutions

2.1 Candidate solutions

① Use SkyWalking's traceId for link tracing

② Use Elastic APM's traceId for link tracing

③ The MDC approach: generate your own traceId and put it into the MDC

At the beginning of a project it is better not to introduce too much middleware, and to try a simple, feasible solution first, so we go with the third option: MDC.

2.2 The MDC approach

MDC (Mapped Diagnostic Context) stores contextual data for the currently running thread. When logging with log4j or logback (via SLF4J), each thread has its own MDC, which is global to that thread: any code running on the thread can easily read and write the thread's MDC.
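As a minimal illustration of the per-thread behavior (a sketch with a hypothetical class name, not code from the project), two threads that each put their own traceId into the MDC never see each other's value:

import java.util.UUID;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcDemo {

    private static final Logger log = LoggerFactory.getLogger(MdcDemo.class);

    public static void main(String[] args) {
        Runnable task = () -> {
            // each thread writes into its own MDC copy
            MDC.put("traceId", UUID.randomUUID().toString());
            log.info("handling one request"); // %X{traceId} resolves to this thread's value
            MDC.clear(); // clean up so pooled threads do not leak stale values
        };
        new Thread(task).start();
        new Thread(task).start(); // the two threads log different traceIds
    }
}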

3. Principle and practice

3.1 Tracing multiple logs within one request

Let's start with the first pain point: how to tie together the multiple log lines of a single request.

The principle of this approach is shown in the figure below:

[Figure: the traceId is read by the interceptor, stored in the MDC, and printed in every log line via %X{traceId}]
(1) Add %X{traceId} to the log pattern in the logback configuration file.

<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %X{traceId} %-5level %logger - %msg%n</pattern>
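For context, this pattern sits inside the encoder of an appender. A minimal logback-spring.xml sketch is shown below; the file name, appender name, and log level are assumptions, not taken from the project:

<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %X{traceId} %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>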

(2) Write a custom interceptor that reads the traceId from the request header. If it is present, put it into the MDC; otherwise generate a UUID as the traceId and put that into the MDC instead.

(3) Register the interceptor.

When we print logs, the traceId is included automatically. As shown below, multiple log lines share the same traceId.

[Screenshot: several log lines from one request, all carrying the same traceId]
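The original screenshot is not reproduced here; as a purely illustrative example (fabricated package names, timestamps, and traceId) of what the pattern above produces, every line of one request carries the same traceId:

2022-07-05 21:00:01.123 [http-nio-8080-exec-1] 7ec27b1c-6d38-4f2a-9c3e-1f2d3a4b5c6d INFO  com.passjava.order.OrderController - method A start
2022-07-05 21:00:01.125 [http-nio-8080-exec-1] 7ec27b1c-6d38-4f2a-9c3e-1f2d3a4b5c6d INFO  com.passjava.order.OrderService - method B executed
2022-07-05 21:00:01.128 [http-nio-8080-exec-1] 7ec27b1c-6d38-4f2a-9c3e-1f2d3a4b5c6d INFO  com.passjava.order.OrderService - method C executed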

Sample code

Interceptor code:

import java.util.UUID;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.MDC;
import org.springframework.stereotype.Service;
import org.springframework.util.StringUtils;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

/**
 * @author www.passjava.cn, WeChat official account: Wukong Talks Architecture
 * @date 2022-07-05
 */
@Service
public class LogInterceptor extends HandlerInterceptorAdapter {

    private static final String TRACE_ID = "traceId";

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        // Reuse the traceId from the request header if present; otherwise generate a new one
        String traceId = request.getHeader(TRACE_ID);
        if (StringUtils.isEmpty(traceId)) {
            MDC.put(TRACE_ID, UUID.randomUUID().toString());
        } else {
            MDC.put(TRACE_ID, traceId);
        }
        return true;
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
        // Remove the entry so it does not leak into the next request handled by this thread
        MDC.remove(TRACE_ID);
    }
}
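One possible refinement, shown as a sketch rather than as part of the original code: postHandle is not invoked when the handler throws an exception, so the cleanup can be moved into afterCompletion, which runs on both the success and failure paths:

@Override
public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) {
    // runs even when the handler threw, so the MDC entry is always cleared
    MDC.remove(TRACE_ID);
}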

Registering the interceptor:

import javax.annotation.Resource;

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

/**
 * @author www.passjava.cn, WeChat official account: Wukong Talks Architecture
 * @date 2022-07-05
 */
@Configuration
public class InterceptorConfig implements WebMvcConfigurer {

    @Resource
    private LogInterceptor logInterceptor;

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Apply the logging interceptor to every incoming request
        registry.addInterceptor(logInterceptor).addPathPatterns("/**");
    }
}

3.2 Tracing logs across services

The schematic diagram of the solution is shown below:

[Figure: the Feign interceptor copying the traceId from the MDC into the request header before the remote call to the coupon service]
When the order service calls the coupon service remotely, an OpenFeign interceptor is needed. The interceptor adds the traceId to the request header, so when the coupon service receives the call it can read the traceId for this request from the header.

The code is as follows:

import feign.RequestInterceptor;
import feign.RequestTemplate;

import org.slf4j.MDC;
import org.springframework.context.annotation.Configuration;

/**
 * @author www.passjava.cn, WeChat official account: Wukong Talks Architecture
 * @date 2022-07-05
 */
@Configuration
public class FeignInterceptor implements RequestInterceptor {

    private static final String TRACE_ID = "traceId";

    @Override
    public void apply(RequestTemplate requestTemplate) {
        // Propagate the current thread's traceId to the downstream service via the request header
        requestTemplate.header(TRACE_ID, MDC.get(TRACE_ID));
    }
}
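For illustration, any OpenFeign client in the order service then carries the header automatically. A hypothetical client interface might look like the following; the service name, path, and return type are assumptions, not from the project:

import java.util.List;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

@FeignClient(name = "coupon-service")
public interface CouponClient {

    // every call goes through FeignInterceptor, so the traceId header is attached automatically
    @GetMapping("/coupons/{userId}")
    List<String> listCouponCodes(@PathVariable("userId") Long userId);
}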

In the logs printed by the two microservices, the traceId values are identical.

[Screenshot: log lines from the order service and the coupon service sharing the same traceId]
Of course, these logs are also shipped into Elasticsearch, and you can then search for a traceId in the Kibana UI to string together the whole call chain.
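For example, assuming the traceId is parsed into a field called traceId during ingestion (the field name is an assumption), a KQL query in the Kibana search bar pulls up every log line of one call chain:

traceId : "7ec27b1c-6d38-4f2a-9c3e-1f2d3a4b5c6d"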

4. Summary

This article uses an interceptor together with MDC to attach a traceId across the full link and write it into every log line, so the call chain can be traced through the logs, whether the calls are method-level within one process or service calls across processes.

In addition, the logs should be imported into Elasticsearch via the ELK Stack; a search for a traceId then retrieves the entire call chain.

- END -