
Source code analysis of current limiting component uber/ratelimit

2022-06-24 06:28:00 Johns

1. Brief introduction

Before analyzing the design of uber/ratelimit, we need to know what the component does. As the name suggests, ratelimit is used for rate limiting ("frequency control"), and it is an implementation of the leaky bucket algorithm.

2. Use cases

The version analyzed here is v0.2.0, and the following example is taken from the official documentation.

func TestExample(t *testing.T) {
	// Initialize a rate limiter with QPS=100
	rl := ratelimit.New(100) // per second

	prev := time.Now()
	for i := 0; i < 10; i++ {
		now := rl.Take()
		if i > 0 {
			fmt.Println(i, now.Sub(prev))
		}
		prev = now
	}
}

Output:

=== RUN   TestExample
1 10ms
2 10ms
3 10ms
4 10ms
5 10ms
6 10ms
7 10ms
8 10ms
9 10ms
--- PASS: TestExample (0.09s)
PASS

3. Source code analysis

To process requests at a fixed rate, we only need to create a limiter with ratelimit's New function; the rate parameter specifies the number of requests allowed per second (RPS). A simple conversion gives the time interval between requests:

limiter.perRequest = time.Second / time.Duration(rate)

Now suppose we set rate=100; the calculation gives perRequest=10ms. When request R1 finishes processing, we record the moment it was handled and call it last. When a later request R2 arrives, if the time elapsed since last is shorter than perRequest, the limiter simply sleeps for the remaining interval.

Leaky bucket algorithm

A well-known drawback of the traditional leaky bucket algorithm is that it cannot adapt to burst traffic. To mitigate this, uber-go introduces the concept of slack (maxSlack), which gives the leaky bucket a limited ability to store tokens. Personally, I think this idea is borrowed from the token bucket algorithm.

The maxSlack variable determines how many tokens can accumulate while the server is idle; when burst traffic arrives, these tokens absorb it. Let's look at how many tokens ratelimit allows to be stored by default:

// buildConfig combines defaults with options.
func buildConfig(opts []Option) config {
  c := config{
     clock: clock.New(),
     slack: 10,
     per:   time.Second,
  }
...
}

We can see that the default is 10 tokens: as long as the server has been idle for more than 100ms, it can handle 10 simultaneous requests immediately, even if they arrive in a sudden burst.

The last problem is how to handle the race condition under high concurrency. If multiple requests arrive within the same 10ms window, they must compete: only the winner is eligible to enter the sleep phase and then execute, while the other requests keep waiting. This version contains two implementations:

The first is mutex-based; with this approach the corresponding goroutines block on the lock:

// Take() implementation in limiter_mutexbased.go
func (t *mutexLimiter) Take() time.Time {
	t.Lock()
	defer t.Unlock()
	...
	return t.last
}

The second spins in a for loop with atomic.CompareAndSwapPointer. This approach does not cause the request's goroutine to block, but the spinning consumes a small amount of extra resources (a new state is allocated on every retry). Uber now uses this lock-free implementation by default to resolve the race.

// Take() implementation in limiter_atomic.go
func (t *atomicLimiter) Take() time.Time {
	...
	// Under contention, a goroutine that has not yet won the race keeps spinning
	for !taken {
		...
		// Try to win the right to proceed via CAS
		taken = atomic.CompareAndSwapPointer(&t.state, previousStatePointer, unsafe.Pointer(&newState))
	}
	t.clock.Sleep(interval)
	return newState.last
}

4. More

1. Customizing the Clock

If you read the uber/ratelimit source, you will notice another configuration field, clock. The default is Go's native time clock, with nanosecond precision. If you need a different clock implementation, you can switch it at initialization with WithClock:

type MyClock struct{}

func (*MyClock) Now() time.Time {
  return time.Now()
}

func (*MyClock) Sleep(d time.Duration) {
  time.Sleep(d)
}

func TestExample(t *testing.T) {
	// Use the custom clock
	myClock := &MyClock{}
	rl := ratelimit.New(100, ratelimit.WithClock(myClock)) // per second

	prev := time.Now()
	for i := 0; i < 10; i++ {
		now := rl.Take()
		if i > 0 {
			fmt.Println(i, now.Sub(prev))
		}
		prev = now
	}
}

2. The future of rate-limiting components

Rate limiting is a big topic, and the leaky bucket algorithm is just one product of its development. As technology evolves, it is clear that business demands on rate-limiting components go beyond simply setting a threshold; they will ultimately evolve toward adaptive algorithms. This is why today's mainstream components, such as Hystrix and Sentinel, basically all provide dynamic, adaptive rate-limiting schemes. And with the cloud-native era upon us, rate limiting will eventually be offered to developers as a standardized basic capability; what we need to do is keep the spirit of constant exploration.


Copyright notice: this article was written by [Johns]. Please include a link to the original when reposting:
https://yzsam.com/2021/07/20210716195545115T.html
