Hello everyone, I'm Fisherman's Son from 「Go School」. Today let's talk about an experience of optimizing Redis CPU usage in a project. Original (Chinese) article: [https://mp.weixin.qq.com/s/16Fn7LahXSadTHS0NXcapQ](https://mp.weixin.qq.com/s/16Fn7LahXSadTHS0NXcapQ)

**01 Background**

This article is based on a feature that records real-time request counts in Redis. Growing traffic pushed the Redis server's CPU above 80% and triggered the automatic alerting. After analysis, we changed the real-time writes to Redis into batched writes, which brought CPU usage down by roughly 30%.

The specific business requirement is as follows: incoming requests are grouped by geographic attribute, and the goal is to cap the total number of requests per country. Once the preset maximum is reached, further traffic is no longer processed and the client immediately gets a 204 response. If the maximum has not been reached, the real-time request count is incremented by 1. (The original post illustrates this flow with a diagram, omitted here.)

**02 Implementation version 1**

The first version is simple: store the maximum value in Redis, and record each country's real-time request count at day granularity. Every time a request comes in, first query the country's maximum and the current day's count, then compare the two. If the count already exceeds the maximum, return directly; otherwise just increment the count by 1.

Take traffic from China (denoted CN) as an example. The Redis keys follow these rules:

- Key holding the maximum number of requests a country can accept: `{country}:max:req`
- Key holding the number of requests received from a country on a given day: `{country}:YYYYMMDD:req`, valid for N days.

The first version is implemented as follows (written against the `github.com/go-redis/redis` client that the later versions also use):

```go
import (
	"time"

	"github.com/go-redis/redis"
)

var redisClient *redis.Client // shared client, initialized at startup

func HasExceedLimitReq() bool {
	maxKey := "CN:max:req"
	maxReq, _ := redisClient.Get(maxKey).Int64() // maximum allowed requests for CN

	day := time.Now().Format("20060102")
	dailyKey := "CN:" + day + ":req"
	dailyReq, _ := redisClient.Get(dailyKey).Int64() // requests already counted today

	if dailyReq > maxReq {
		return true
	}

	redisClient.Incr(dailyKey)
	redisClient.Expire(dailyKey, 7*24*time.Hour)
	return false
}
```

In this implementation `dailyKey` does not need to be kept for long: once the day has passed, its value is useless. We give it a 7-day expiration only so that recent history can still be queried. Redis's Incr command cannot carry an expiration time, so an Expire call is added right after the Incr.

Now, what are the problems with this implementation? Logically, nothing is wrong. But whenever a request comes in and the limit has not been exceeded, we issue 4 operations against Redis: two reads and two writes (Incr and Expire). In other words, the QPS Redis has to carry is 4 times the traffic itself. If the traffic QPS keeps growing and reaches, say, 100,000, then Redis receives 400,000 requests per second, and its CPU consumption naturally climbs.

So what can be optimized? The first observation is that the Expire call does not have to happen on every request. In theory the expiration only needs to be set once per key, which removes one write operation.
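As a side note, and not what the article does (version 2 below tracks this in process memory instead): Redis itself can tell you when the expiration needs to be set. Incr creates a missing key, treating it as 0, before incrementing, so a return value of 1 means the key was just created by this call. A minimal sketch, reusing the assumed `redisClient` and `dailyKey` from the snippet above:

```go
// Sketch only: attach the TTL exactly once, on the Incr that creates the key.
newVal, err := redisClient.Incr(dailyKey).Result()
if err == nil && newVal == 1 {
	// The key did not exist before this Incr, so set the 7-day TTL now.
	redisClient.Expire(dailyKey, 7*24*time.Hour)
}
```

This still costs the Expire round trip on the very first request of each day, but on every later request that write disappears.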
**03 Implementation version 2: reduce the number of Expire calls**

We use a map named `hasUpdateExpire` to record whether a key has already had its expiration set, like this:

```go
var hasUpdateExpire = make(map[string]struct{}) // global variable

func HasExceedLimitReq() bool {
	maxKey := "CN:max:req"
	maxReq, _ := redisClient.Get(maxKey).Int64()

	day := time.Now().Format("20060102")
	dailyKey := "CN:" + day + ":req"
	dailyReq, _ := redisClient.Get(dailyKey).Int64()

	if dailyReq > maxReq {
		return true
	}

	redisClient.Incr(dailyKey)
	if _, ok := hasUpdateExpire[dailyKey]; !ok {
		redisClient.Expire(dailyKey, 7*24*time.Hour)
		hasUpdateExpire[dailyKey] = struct{}{}
	}
	return false
}
```

We know that in Go a map is not safe for concurrent use, so the following code has a concurrency problem:

```go
if _, ok := hasUpdateExpire[dailyKey]; !ok {
	redisClient.Expire(dailyKey, 7*24*time.Hour)
	hasUpdateExpire[dailyKey] = struct{}{}
}
```

Several goroutines may evaluate `hasUpdateExpire[dailyKey]` at the same time, all see `ok == false`, and then all execute the two lines inside the if:

```go
redisClient.Expire(dailyKey, 7*24*time.Hour)
hasUpdateExpire[dailyKey] = struct{}{}
```

For our business scenario, executing Expire a few extra times does not matter: at high QPS, a handful of extra Expire calls is negligible compared with the total number of requests. (Strictly speaking, concurrent writes to a plain Go map can also make the runtime abort the program, so in a real multi-goroutine handler the map access itself should be protected; a concurrency-safe variant is sketched at the end of this section.)

Then what if QPS keeps growing? The next step is asynchronous batch writing. This approach only suits scenarios where the count does not have to be exact. Let's look at version three.
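Before moving on, a quick aside that is not part of the original article: the "set the expiration only once" bookkeeping can be made safe for concurrent use with the standard library's `sync.Map`. A minimal sketch, with `redisClient` being the same assumed client as in the snippets above:

```go
import (
	"sync"
	"time"

	"github.com/go-redis/redis"
)

var (
	redisClient     *redis.Client // same assumed client as above
	hasUpdateExpire sync.Map      // safe for concurrent use, unlike a plain map
)

// setExpireOnce attaches the 7-day TTL at most once per key, even when
// called from many goroutines at the same time.
func setExpireOnce(dailyKey string) {
	// LoadOrStore reports loaded == false only for the first caller that
	// stores the marker, so only that goroutine issues the Expire command.
	if _, loaded := hasUpdateExpire.LoadOrStore(dailyKey, struct{}{}); !loaded {
		redisClient.Expire(dailyKey, 7*24*time.Hour)
	}
}
```

One trade-off of this sketch: the marker is stored before Expire runs, so a failed Expire call is not retried, which for this use case is no worse than the duplicate-call behaviour of the plain map.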
**04 Implementation version 3: asynchronous batch writes**

In this version we no longer write to Redis directly. Instead, counts are staged in an in-memory cache held in a global variable, and a timer is started at the same time; on every tick, the data accumulated in memory is written to Redis in batch. (The original post illustrates this with a diagram, omitted here.)

We define the following data structures:

```go
import (
	"sync"
	"time"

	"github.com/go-redis/redis"
)

const (
	DefaultExpiration = 86400 * time.Second * 7
)

type CounterCache struct {
	rwMu sync.RWMutex

	redisClient redis.Cmdable

	countCache      map[string]int64    // counts not yet flushed to Redis
	hasUpdateExpire map[string]struct{} // keys whose expiration has been set
}

func NewCounterCache(redisClient redis.Cmdable) *CounterCache {
	c := &CounterCache{
		redisClient:     redisClient,
		countCache:      make(map[string]int64),
		hasUpdateExpire: make(map[string]struct{}),
	}
	go c.startFlushTicker()
	return c
}

// IncrBy adds value to the in-memory count and returns the combined total:
// what is staged locally plus what is already stored in Redis.
func (c *CounterCache) IncrBy(key string, value int64) int64 {
	val := c.incrCacheBy(key, value)
	redisCount, _ := c.redisClient.Get(key).Int64()
	return val + redisCount
}

func (c *CounterCache) incrCacheBy(key string, value int64) int64 {
	c.rwMu.Lock()
	defer c.rwMu.Unlock()

	count := c.countCache[key]
	count += value
	c.countCache[key] = count
	return count
}

// Get returns the combined count: the staged in-memory part plus Redis.
func (c *CounterCache) Get(key string) (int64, error) {
	cacheVal := c.get(key)
	redisValue, err := c.redisClient.Get(key).Int64()
	if err != nil && err != redis.Nil {
		return cacheVal, err
	}

	return redisValue + cacheVal, nil
}

func (c *CounterCache) get(key string) int64 {
	c.rwMu.RLock()
	defer c.rwMu.RUnlock()
	return c.countCache[key]
}

func (c *CounterCache) startFlushTicker() {
	ticker := time.NewTicker(time.Second * 5)
	for range ticker.C {
		c.flush()
	}
}

// flush swaps out the staged counts under the lock, then writes them to
// Redis outside the lock so request handlers are not blocked by network I/O.
func (c *CounterCache) flush() {
	var oldCountCache map[string]int64
	c.rwMu.Lock()
	oldCountCache = c.countCache
	c.countCache = make(map[string]int64)
	c.rwMu.Unlock()

	for key, value := range oldCountCache {
		c.redisClient.IncrBy(key, value)
		// hasUpdateExpire is only touched from this flush goroutine,
		// so it needs no extra locking.
		if _, ok := c.hasUpdateExpire[key]; !ok {
			if err := c.redisClient.Expire(key, DefaultExpiration).Err(); err == nil {
				c.hasUpdateExpire[key] = struct{}{}
			}
		}
	}
}
```

The main idea is that incoming counts are first staged in the struct's `countCache`. Each `CounterCache` instance starts a ticker, and at a fixed interval (5 seconds here) the staged data is flushed to Redis. Here is how it is used:

```go
package main

import (
	"net/http"
	"time"

	"github.com/go-redis/redis"
)

var counterCache *CounterCache

func main() {
	redisClient := redis.NewClient(&redis.Options{
		Addr:     "127.0.0.1:6379",
		Password: "",
	})
	counterCache = NewCounterCache(redisClient)

	http.HandleFunc("/", IndexHandler)
	http.ListenAndServe(":8080", nil)
}

func IndexHandler(w http.ResponseWriter, r *http.Request) {
	if HasExceedLimitReq() {
		w.WriteHeader(http.StatusNoContent) // 204, per the business requirement
		return
	}
	// handle the normal logic
}

func HasExceedLimitReq() bool {
	maxKey := "CN:max:req"
	maxCount, _ := counterCache.Get(maxKey)

	dailyKey := "CN:" + time.Now().Format("20060102") + ":req"
	dailyCount, _ := counterCache.Get(dailyKey)
	if dailyCount > maxCount {
		return true
	}

	counterCache.IncrBy(dailyKey, 1)
	return false
}
```

This approach is only suitable when the count does not have to be exact. For example, if the server exits abnormally, whatever is still sitting in `countCache` and has not yet been flushed to Redis is lost. Another thing to note is that `countCache` is a map, and maps in Go are not safe for concurrent use, so access to it must be guarded with the read-write lock. (A sketch of flushing once more on graceful shutdown, to narrow that loss window, follows below.)
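The loss window can be narrowed, though not removed, by flushing one final time when the process is asked to shut down. This is not in the original article; below is a minimal sketch of a variant of the `main` above, assuming we add an exported `Flush` method that simply reuses the `flush` shown earlier:

```go
import (
	"context"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/go-redis/redis"
)

// Flush drains the staged counters to Redis immediately.
// Added only for this sketch; it is not part of the original article's code.
func (c *CounterCache) Flush() { c.flush() }

func main() {
	redisClient := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
	counterCache = NewCounterCache(redisClient)

	http.HandleFunc("/", IndexHandler)
	srv := &http.Server{Addr: ":8080"}
	go srv.ListenAndServe()

	// Block until the process receives SIGINT or SIGTERM.
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
	<-quit

	// Stop accepting new requests, wait briefly for in-flight handlers,
	// then push whatever is still staged in memory to Redis.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	srv.Shutdown(ctx)
	counterCache.Flush()
}
```

An abnormal exit (a crash or an OOM kill) still loses at most one flush interval's worth of counts, which is exactly the trade-off this version accepts.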
**05 Summary**

As a service's QPS grows, the utilization of all kinds of resources grows with it as long as we do not cap the QPS itself. The optimization idea here is to cut unnecessary writes and to turn real-time writes into batched writes, thereby reducing the number of operations that hit Redis.

This way of counting suits scenarios where the numbers do not have to be exact, such as video play counts or the follower counts of big-V accounts on Weibo.