Implementing a concurrent non-blocking cache in Go
2022-06-13 00:13:00 【Ethan97】
Let's start with an example of function memoization:
// A Memo caches the results of calling a Func.
type Memo struct {
    f     Func
    cache map[string]result
}

// Func is the type of the function to memoize.
type Func func(key string) (interface{}, error)

type result struct {
    value interface{}
    err   error
}

func New(f Func) *Memo {
    return &Memo{f: f, cache: make(map[string]result)}
}

// NOTE: not concurrency-safe!
func (memo *Memo) Get(key string) (interface{}, error) {
    res, ok := memo.cache[key]
    if !ok {
        res.value, res.err = memo.f(key)
        memo.cache[key] = res
    }
    return res.value, res.err
}
The function f is an expensive computation, so we cache its results in a map to speed up repeated calls. This is function memoization.

Each call to Get looks the key up in memo's cache. On a miss, it calls f to compute the result and records it in the cache.

The Get method above reads and updates the cache without any synchronization, so it is not concurrency-safe.
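To see the problem, here is a minimal concurrent driver. It is not part of the original example: slowDouble is a hypothetical stand-in for an expensive function, and New/Get are the ones defined above. Running it with the race detector (go run -race) reports a data race on memo.cache, and concurrent map writes can even crash the program.

import (
    "fmt"
    "log"
    "strconv"
    "sync"
    "time"
)

// slowDouble simulates an expensive computation by sleeping before it returns.
func slowDouble(key string) (interface{}, error) {
    time.Sleep(100 * time.Millisecond)
    n, err := strconv.Atoi(key)
    return 2 * n, err
}

func main() {
    memo := New(slowDouble)
    keys := []string{"1", "2", "1", "3", "2"}

    var wg sync.WaitGroup
    for _, key := range keys {
        wg.Add(1)
        go func(key string) {
            defer wg.Done()
            v, err := memo.Get(key) // unsynchronized reads and writes of memo.cache
            if err != nil {
                log.Println(err)
                return
            }
            fmt.Printf("%s -> %v\n", key, v)
        }(key)
    }
    wg.Wait()
}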
The simplest fix is to acquire a mutex on every call to Get:
type Memo struct {
    f     Func
    mu    sync.Mutex // guards cache
    cache map[string]result
}

// Get is concurrency-safe.
func (memo *Memo) Get(key string) (value interface{}, err error) {
    memo.mu.Lock()
    res, ok := memo.cache[key]
    if !ok {
        res.value, res.err = memo.f(key)
        memo.cache[key] = res
    }
    memo.mu.Unlock()
    return res.value, res.err
}
Because every call must acquire the mutex, Get serializes all the requests we wanted to run in parallel.
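A rough way to see this serialization (a sketch, not from the original post, reusing the imports from the driver above): with the fully locked Get, n concurrent requests for n different keys take roughly n times the cost of a single computation, because f runs while the mutex is held.

func main() {
    // Hypothetical expensive function: each call takes about 100ms.
    memo := New(func(key string) (interface{}, error) {
        time.Sleep(100 * time.Millisecond)
        return key, nil
    })

    start := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < 8; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            memo.Get(strconv.Itoa(i)) // eight different keys
        }(i)
    }
    wg.Wait()

    // Because f runs under the lock, the total is ~800ms rather than ~100ms.
    fmt.Println("elapsed:", time.Since(start))
}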
Consider the following improvement, which holds the lock only while touching the map:
func (memo *Memo) Get(key string) (value interface{}, err error) {
    memo.mu.Lock()
    res, ok := memo.cache[key]
    memo.mu.Unlock()
    if !ok {
        res.value, res.err = memo.f(key)

        // Between the two critical sections, several goroutines
        // may race to compute f(key) and update the map.
        memo.mu.Lock()
        memo.cache[key] = res
        memo.mu.Unlock()
    }
    return res.value, res.err
}
This version acquires the lock twice: once to look up the cache, and a second time to update it when the lookup finds nothing. Between the two critical sections, several goroutines may compute f(key) for the same key and overwrite each other's result. Ideally we would avoid this redundant work; this feature is sometimes called duplicate suppression.
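To make the duplicated work visible, here is a small sketch that is not in the original post: countingSquare is a hypothetical expensive function that counts how many times it actually runs (it needs sync/atomic in addition to the imports above). With this version of Get, several goroutines can miss the cache concurrently and each recompute the result for the same key.

var calls int64 // number of real computations, updated atomically

func countingSquare(key string) (interface{}, error) {
    atomic.AddInt64(&calls, 1)
    time.Sleep(100 * time.Millisecond) // simulate expensive work
    n, err := strconv.Atoi(key)
    return n * n, err
}

func main() {
    memo := New(countingSquare)

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            memo.Get("7") // every goroutine asks for the same key
        }()
    }
    wg.Wait()

    // The count is often greater than 1, because several goroutines can miss
    // the cache before any of them has stored a result.
    fmt.Println("f ran", atomic.LoadInt64(&calls), "times")
}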
In the fourth version of the cache, each entry gains a new channel, ready. Once the entry's result field has been set, the channel is closed; waiting goroutines receive this broadcast and can then read the result from the entry.
// Func is the type of the function to memoize.
type Func func(string) (interface{}, error)

type result struct {
    value interface{}
    err   error
}

type entry struct {
    res   result
    ready chan struct{} // closed when res is ready
}

func New(f Func) *Memo {
    return &Memo{f: f, cache: make(map[string]*entry)}
}

type Memo struct {
    f     Func
    mu    sync.Mutex        // guards cache
    cache map[string]*entry // the cache now holds an *entry per key
}

func (memo *Memo) Get(key string) (value interface{}, err error) {
    memo.mu.Lock()
    e := memo.cache[key]
    if e == nil {
        // This is the first request for this key.
        // This goroutine becomes responsible for computing
        // the value and broadcasting the ready condition.
        e = &entry{ready: make(chan struct{})}
        memo.cache[key] = e
        memo.mu.Unlock()

        e.res.value, e.res.err = memo.f(key)
        close(e.ready) // broadcast ready condition
    } else {
        // This is a repeat request for this key.
        memo.mu.Unlock()
        <-e.ready // wait for ready condition
    }
    return e.res.value, e.res.err
}
When Get finds no record for the key in the cache, it constructs an entry and stores it in the cache, even though the result for that key has not been computed yet.

If other goroutines call Get for the same key in the meantime, they reach the <-e.ready statement and block waiting on the channel. Only after the computation finishes and the goroutine responsible for it closes the channel can the waiting goroutines continue and read the result.
- When a goroutine asks for a key that has no result yet, it creates an entry, puts it in the cache, releases the lock, and then calls f. Once the computation finishes, it stores the result in the entry and closes the ready channel.
- When a goroutine asks for a key whose entry already exists, it releases the lock immediately and waits for that entry's ready channel to be closed (a rough timing check of this behaviour is sketched after this list).
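A rough timing check (a sketch under the same assumptions as the earlier drivers, not part of the original post): with the ready-channel version, many concurrent requests for one key finish in roughly the time of a single computation, because only the first goroutine calls f while the rest simply wait on the channel.

func main() {
    memo := New(func(key string) (interface{}, error) {
        time.Sleep(200 * time.Millisecond) // hypothetical expensive work
        return "value of " + key, nil
    })

    start := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < 8; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            memo.Get("x") // all goroutines ask for the same key
        }()
    }
    wg.Wait()

    // One computation, seven waiters: the elapsed time is ~200ms, not ~1.6s.
    fmt.Println("elapsed:", time.Since(start))
}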
Another design
The approach above uses shared variables and locks. Another option is communicating sequential processes (CSP).

In the new design, the map variable is confined to a single monitor goroutine, and callers of Get send it a message instead of touching the map themselves.
// Func is the type of the function to memoize.
type Func func(key string) (interface{}, error)

// A result is the result of calling a Func.
type result struct {
    value interface{}
    err   error
}

type entry struct {
    res   result
    ready chan struct{} // closed when res is ready
}

// A request is a message requesting that the Func be applied to key.
type request struct {
    key      string
    response chan<- result // the client wants a single result
}

type Memo struct{ requests chan request }

// New returns a memoization of f. Clients must subsequently call Close.
func New(f Func) *Memo {
    memo := &Memo{requests: make(chan request)}
    go memo.server(f)
    return memo
}

func (memo *Memo) Get(key string) (interface{}, error) {
    response := make(chan result)
    memo.requests <- request{key, response}
    res := <-response
    return res.value, res.err
}

func (memo *Memo) Close() { close(memo.requests) }

func (memo *Memo) server(f Func) {
    cache := make(map[string]*entry)
    for req := range memo.requests {
        e := cache[req.key]
        if e == nil {
            // This is the first request for this key.
            e = &entry{ready: make(chan struct{})}
            cache[req.key] = e
            go e.call(f, req.key) // call f(key)
        }
        go e.deliver(req.response)
    }
}

func (e *entry) call(f Func, key string) {
    // Evaluate the function.
    e.res.value, e.res.err = f(key)
    // Broadcast the ready condition.
    close(e.ready)
}

func (e *entry) deliver(response chan<- result) {
    // Wait for the ready condition.
    <-e.ready
    // Send the result to the client.
    response <- e.res
}
The Get method creates a response channel, puts it inside a request, sends the request to the monitor goroutine, and then immediately reads from its own response channel.

The monitor goroutine (the server method) keeps reading from the requests channel until that channel is closed. For each request it first consults the cache and, if no entry is found, creates and inserts a new one.

On a miss, the monitor goroutine creates an entry, puts it in the cache, and calls go e.call(f, req.key) to start a goroutine that computes the result and closes the ready channel. It also calls go e.deliver(req.response), which waits for the ready channel to close and then sends the result on the response channel.

On a hit, the entry's computation has already been started (and may already have finished), so the monitor goroutine only calls go e.deliver(req.response); deliver blocks on <-e.ready and sends the result to the client once it is available.

Comparing the two approaches: in the first, Get accesses the shared cache directly and does the concurrency control itself with a mutex and a ready channel; in the second, Get only sends a request on a channel and waits for the reply on its response channel, while all the concurrency control lives in the monitor goroutine.
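From the caller's point of view the two versions are used the same way; the one extra obligation of the channel-based version is calling Close so the monitor goroutine can exit. A minimal usage sketch, reusing the hypothetical slowDouble from the first driver:

func main() {
    memo := New(slowDouble) // starts the monitor goroutine
    defer memo.Close()      // closes memo.requests, ending the server loop

    v, err := memo.Get("21")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(v) // 42
}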