Go-zero Microservice Practice Series (VII): How to Optimize for Such High Demand
2022-06-29 22:44:00 【Microservice practice】
In the previous two articles we introduced various best practices for using caches. First we covered the basics: how to use the cache code automatically generated by go-zero and how to write cache logic by hand. Then we explained solutions to common problems such as cache penetration, breakdown, and avalanche, and finally we focused on how to keep the cache consistent. Because caching is so important for high-concurrency services, this article continues the topic.
Local cache
When we run into queries for extremely hot data, we should consider a local cache. A hotspot local cache lives inside the application server's own process and is used to absorb hotspot queries, shielding distributed caches such as Redis, as well as the database, from the pressure.
In our mall, the home page banner shows advertised or recommended products whose information is entered and updated by operations staff in the admin backend. These products receive a very large number of requests, so many that even Redis may struggle to carry them, so this is a good place to optimize with a local cache.

In the product database, create a product operation table product_operation. For simplicity only the necessary fields are kept: product_id is the id of the promoted product and status is its state; when status is 1 the product is shown in the home page banner.
CREATE TABLE `product_operation` (
  `id` bigint unsigned NOT NULL AUTO_INCREMENT,
  `product_id` bigint unsigned NOT NULL DEFAULT 0 COMMENT 'product id',
  `status` int NOT NULL DEFAULT '1' COMMENT 'operation status: 0 - offline, 1 - online',
  `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
  `update_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
  PRIMARY KEY (`id`),
  KEY `ix_update_time` (`update_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='product operation table';
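For reference, below is a minimal sketch of the query that the logic later in this article relies on. It follows the conventions of a goctl-generated model, so m.conn, m.table, productOperationRows and the ProductOperation type are assumed names rather than code taken from the project:

// OperationProducts returns the operation records with the given status
// (1 means the product is online and shown in the home page banner).
func (m *defaultProductOperationModel) OperationProducts(ctx context.Context, status int64) ([]*ProductOperation, error) {
    query := fmt.Sprintf("select %s from %s where `status` = ?", productOperationRows, m.table)
    var resp []*ProductOperation
    if err := m.conn.QueryRowsCtx(ctx, &resp, query, status); err != nil {
        return nil, err
    }
    return resp, nil
}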
Implementing a local cache is fairly simple: we could roll our own with a map, but go-zero's collection package already provides a Cache that does exactly this, so we simply use it; reinventing the wheel is rarely a wise choice. localCacheExpire is the expiration time of the local cache, and Cache exposes Get and Set methods that are very easy to use:
localCache, err := collection.NewCache(localCacheExpire)
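For context, a minimal sketch of how this cache might be wired into the service context is shown below; the ServiceContext shape, the LocalCache field name and the one-minute expiration are assumptions for illustration:

package svc

import (
    "time"

    "github.com/zeromicro/go-zero/core/collection"
)

// localCacheExpire is the lifetime of entries in the local cache; one minute is
// an illustrative value, tune it to how much staleness the business can tolerate.
const localCacheExpire = time.Minute

type ServiceContext struct {
    LocalCache *collection.Cache
    // ... other dependencies such as OperationModel and the product model
}

func NewServiceContext() *ServiceContext {
    localCache, err := collection.NewCache(localCacheExpire)
    if err != nil {
        // the service cannot serve hotspot traffic without its local cache
        panic(err)
    }
    return &ServiceContext{
        LocalCache: localCache,
    }
}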
We first look the data up in the local cache and return it directly on a hit. On a miss, we query the database for the ids of the promoted products, aggregate the corresponding product information, and finally write the result back into the local cache. The detailed logic is as follows:
func (l *OperationProductsLogic) OperationProducts(in *product.OperationProductsRequest) (*product.OperationProductsResponse, error) {
    // Return directly if the local cache already holds the promoted products.
    opProducts, ok := l.svcCtx.LocalCache.Get(operationProductsKey)
    if ok {
        return &product.OperationProductsResponse{Products: opProducts.([]*product.ProductItem)}, nil
    }
    // Cache miss: load the online operation records from the database.
    pos, err := l.svcCtx.OperationModel.OperationProducts(l.ctx, validStatus)
    if err != nil {
        return nil, err
    }
    var pids []int64
    for _, p := range pos {
        pids = append(pids, p.ProductId)
    }
    // Aggregate the product details for the promoted product ids.
    products, err := l.productListLogic.productsByIds(l.ctx, pids)
    if err != nil {
        return nil, err
    }
    var pItems []*product.ProductItem
    for _, p := range products {
        pItems = append(pItems, &product.ProductItem{
            ProductId: p.Id,
            Name:      p.Name,
        })
    }
    // Write the result back into the local cache for subsequent requests.
    l.svcCtx.LocalCache.Set(operationProductsKey, pItems)
    return &product.OperationProductsResponse{Products: pItems}, nil
}
Request the interface with the grpcurl debugging tool. The first request misses the cache; subsequent requests hit the local cache, and once the local cache expires the data is loaded from the db into the local cache again:
~ grpcurl -plaintext -d '{}' 127.0.0.1:8081 product.Product.OperationProducts
{
  "products": [
    {
      "productId": "32",
      "name": "Electric fan 6"
    },
    {
      "productId": "31",
      "name": "Electric fan 5"
    },
    {
      "productId": "33",
      "name": "Electric fan 7"
    }
  ]
}
Note that not all data is suitable for local caching. Local caches fit data that is requested at very high volume and for which the business can tolerate a certain amount of inconsistency, because a local cache is generally not updated proactively: it only refreshes from the db after it expires. Whether a local cache is needed therefore depends on the specific business scenario.
Automatically identify hotspot data
The home page banner scenario is configured by operations staff, so we know in advance which data may become hot. In other cases, however, we cannot predict which data will turn into a hotspot, so we need to identify hot data adaptively and automatically, and then promote it into the local cache.
We maintain a sliding window, say 10s wide, to count which keys are accessed at high frequency within those 10s. One sliding window maps to multiple buckets, and each bucket holds a map whose key is the product id and whose value is the number of requests for that product. We can then periodically (for example every 10s) aggregate the counts of all keys across the buckets, feed them into a max-heap, and easily take the topK keys from it. We can also set a threshold: if a key is accessed more than, say, 500 times within one sliding window, we treat it as a hot key and promote it into the local cache.
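Below is a minimal sketch of such a detector under the assumptions above; all type and function names are illustrative and not part of the project code. A caller would invoke Incr on every product query and periodically call TopK to decide which ids to promote into the local cache:

package hotkey

import (
    "container/heap"
    "sync"
    "time"
)

// Detector is a sliding-window hot-key detector: the window is split into
// len(buckets) time slices of `interval` each, and a key whose total hits in
// the window reach `threshold` is considered hot.
type Detector struct {
    mu        sync.Mutex
    buckets   []map[int64]int // per-slice hit counts, keyed by product id
    cur       int             // index of the slice currently being written
    interval  time.Duration
    threshold int
}

func NewDetector(bucketNum int, interval time.Duration, threshold int) *Detector {
    d := &Detector{buckets: make([]map[int64]int, bucketNum), interval: interval, threshold: threshold}
    for i := range d.buckets {
        d.buckets[i] = make(map[int64]int)
    }
    go d.rotate()
    return d
}

// Incr records one access to the given product id.
func (d *Detector) Incr(productId int64) {
    d.mu.Lock()
    d.buckets[d.cur][productId]++
    d.mu.Unlock()
}

// rotate advances to the next slice every interval and clears it, so the
// window always covers the most recent bucketNum*interval of traffic.
func (d *Detector) rotate() {
    ticker := time.NewTicker(d.interval)
    defer ticker.Stop()
    for range ticker.C {
        d.mu.Lock()
        d.cur = (d.cur + 1) % len(d.buckets)
        d.buckets[d.cur] = make(map[int64]int)
        d.mu.Unlock()
    }
}

// TopK sums all slices, keeps the ids above the threshold in a max-heap and
// returns at most k of them, hottest first.
func (d *Detector) TopK(k int) []int64 {
    total := make(map[int64]int)
    d.mu.Lock()
    for _, b := range d.buckets {
        for id, n := range b {
            total[id] += n
        }
    }
    d.mu.Unlock()

    h := &pairHeap{}
    for id, n := range total {
        if n >= d.threshold {
            heap.Push(h, pair{id, n})
        }
    }
    var hot []int64
    for h.Len() > 0 && len(hot) < k {
        hot = append(hot, heap.Pop(h).(pair).id)
    }
    return hot
}

type pair struct {
    id   int64
    hits int
}

type pairHeap []pair

func (h pairHeap) Len() int           { return len(h) }
func (h pairHeap) Less(i, j int) bool { return h[i].hits > h[j].hits } // max-heap on hits
func (h pairHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *pairHeap) Push(x interface{}) { *h = append(*h, x.(pair)) }
func (h *pairHeap) Pop() interface{} {
    old := *h
    x := old[len(old)-1]
    *h = old[:len(old)-1]
    return x
}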

Cache usage tips
Here are some practical tips for using caches:
- Name keys so they are readable and self-explanatory, but keep them as short as readability allows to reduce memory usage. For values, prefer int over string where possible; for small integer values Redis can reuse its internal shared_object cache.
- When using hashes, split oversized keys: all fields of one hash key land on the same Redis node, so a huge hash leads to uneven memory and request distribution. Splitting a big hash into several small hashes spreads memory evenly across nodes and avoids single-node request hotspots.
- To prevent requests for nonexistent data from missing the cache and hitting the database every time, cache empty values.
- When storing objects in the cache, prefer protobuf for serialization to keep the data as small as possible.
- When appending new data, make sure the cached key already exists; Expire can be used to check whether it does.
- For scenarios such as recording daily logins, bitmaps (BITSET) can be used; to keep a single bitmap from becoming too large or too hot, it can be sharded.
- When using sorted sets, avoid zrange or zrevrange calls that return very large result sets, since their complexity is high.
- Use PIPELINE where possible when talking to the cache, but also keep each batch from growing too large.
- Avoid oversized values.
- Set an expiration time on cached data whenever possible.
- Use full-collection commands such as HGETALL for hashes and SMEMBERS for sets with caution: they trigger a full scan of the underlying data structure, and with large collections they will block the Redis main thread. To fetch all data of a collection type, use SSCAN, HSCAN and similar commands to return the data in batches and reduce blocking of the main thread (see the sketch after this list).
- Use MONITOR with caution: it keeps writing the monitored commands into the output buffer, and with heavy online traffic the buffer will soon overflow and hurt Redis performance.
- Disable KEYS, FLUSHALL, FLUSHDB and similar commands in the production environment.
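To make the batched-scan tip concrete, here is a minimal sketch that reads a large hash with HSCAN instead of HGETALL. It uses the go-redis client purely for illustration; the function name, the example key and the batch size are assumptions rather than code from this project:

package main

import (
    "context"
    "fmt"

    "github.com/redis/go-redis/v9"
)

// scanHashInBatches reads every field of a (potentially large) hash with HSCAN
// instead of HGETALL, so each round trip only touches `count` entries and the
// Redis main thread is never asked to walk the whole structure at once.
func scanHashInBatches(ctx context.Context, rdb *redis.Client, key string, count int64) (map[string]string, error) {
    result := make(map[string]string)
    var cursor uint64
    for {
        // HSCAN returns fields and values alternately: f1, v1, f2, v2, ...
        kvs, next, err := rdb.HScan(ctx, key, cursor, "*", count).Result()
        if err != nil {
            return nil, err
        }
        for i := 0; i+1 < len(kvs); i += 2 {
            result[kvs[i]] = kvs[i+1]
        }
        if next == 0 { // a zero cursor means the scan has completed a full pass
            return result, nil
        }
        cursor = next
    }
}

func main() {
    rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:6379"})
    fields, err := scanHashInBatches(context.Background(), rdb, "product:detail:32", 100)
    if err != nil {
        panic(err)
    }
    fmt.Println(len(fields), "fields loaded in batches")
}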
Conclusion
This article introduced how to use a local hotspot cache to handle extremely high request volumes. Hotspot caches fall into two kinds: known and unknown hotspots. Known hotspots are simple: load them from the database into memory in advance. For unknown hotspots we need to identify the hot data adaptively and then promote it into the local cache. Finally, some tips for using caches in real production were introduced; apply them flexibly in the production environment and try to avoid these pitfalls.
I hope this article helps you. Thank you.
Updated every Monday and Thursday.
Code repository: https://github.com/zhoushuguang/lebron
Project address
https://github.com/zeromicro/go-zero
You are welcome to use go-zero and star it to support us!
WeChat exchange group
Follow the 『Microservice practice』 official account and click "Communication group" to get the community group QR code.