Go-zero Microservice Practice Series (VII): Optimizing for Ultra-High Request Volume
2022-06-29 22:44:00 【Microservice practice】
In the previous two articles we covered best practices for using caches: first the basics of cache usage and how to work with both the cache code go-zero generates automatically and the cache logic you write yourself, then solutions to the common problems of cache penetration, breakdown, and avalanche, and finally how to keep caches consistent. Caching is critical for high-concurrency services, so this article continues the topic.
Local cache
When we face queries for extremely hot data, it is time to consider a local cache. A hotspot local cache lives inside the application server process itself and absorbs hotspot queries, shielding distributed caches such as Redis, and the database behind them, from the load.
In our mall, the home page Banner shows promoted or recommended products, which operators enter and change in the admin console. Traffic to these products is so heavy that even Redis struggles to carry it, so this is a good place to apply a local cache.

In the product database, create a product operation table product_operation. For simplicity we keep only the necessary fields: product_id is the id of the product being promoted, and status is the state of the promotion; when status is 1 the product is shown in the home page Banner.
CREATE TABLE `product_operation` (
  `id` bigint unsigned NOT NULL AUTO_INCREMENT,
  `product_id` bigint unsigned NOT NULL DEFAULT 0 COMMENT 'product id',
  `status` int NOT NULL DEFAULT '1' COMMENT 'promotion status: 0 - offline, 1 - online',
  `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
  `update_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
  PRIMARY KEY (`id`),
  KEY `ix_update_time` (`update_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='product operation table';
Implementing a local cache is fairly simple: we could roll our own with a map, but go-zero's collection package already provides a Cache for exactly this purpose, so let's just use it; reinventing the wheel is rarely a wise choice. localCacheExpire is the expiry time of the local cache; Cache offers Get and Set methods and is very simple to use:
localCache, err := collection.NewCache(localCacheExpire)
Look up the local cache first and return immediately on a hit. On a miss, first query the database for the promoted product ids, then aggregate the product details, and finally write the result back into the local cache. The detailed logic is as follows:
func (l *OperationProductsLogic) OperationProducts(in *product.OperationProductsRequest) (*product.OperationProductsResponse, error) {
	opProducts, ok := l.svcCtx.LocalCache.Get(operationProductsKey)
	if ok {
		return &product.OperationProductsResponse{Products: opProducts.([]*product.ProductItem)}, nil
	}
	pos, err := l.svcCtx.OperationModel.OperationProducts(l.ctx, validStatus)
	if err != nil {
		return nil, err
	}
	var pids []int64
	for _, p := range pos {
		pids = append(pids, p.ProductId)
	}
	products, err := l.productListLogic.productsByIds(l.ctx, pids)
	if err != nil {
		return nil, err
	}
	var pItems []*product.ProductItem
	for _, p := range products {
		pItems = append(pItems, &product.ProductItem{
			ProductId: p.Id,
			Name:      p.Name,
		})
	}
	l.svcCtx.LocalCache.Set(operationProductsKey, pItems)
	return &product.OperationProductsResponse{Products: pItems}, nil
}
Use the grpcurl debugging tool to call the interface. The first request misses the cache; subsequent requests hit the local cache, and once the local cache expires the next request falls back to the db and reloads the data into the local cache:
~ grpcurl -plaintext -d '{}' 127.0.0.1:8081 product.Product.OperationProducts
{
  "products": [
    {
      "productId": "32",
      "name": "Electric fan 6"
    },
    {
      "productId": "31",
      "name": "Electric fan 5"
    },
    {
      "productId": "33",
      "name": "Electric fan 7"
    }
  ]
}
Note that not all data is suitable for local caching. A local cache fits data with a very high request volume where the business can tolerate some inconsistency, because a local cache is generally not updated proactively: it waits until the entry expires and then reloads from the db. Whether a local cache is warranted therefore depends on the business case.
Automatically identify hotspot data
The home page Banner scenario is configured by operators, so we know in advance which data is likely to be hot. In other cases we cannot predict which data will become hot, so we need a way to identify hot data adaptively and automatically, then promote it into the local cache.
We maintain a sliding window, say 10s wide, and count how frequently each key is accessed within it. One sliding window maps to multiple Buckets, each holding a map whose key is the product id and whose value is that product's request count. Periodically (e.g. every 10s) we aggregate the keys across all current Buckets and feed the totals into a big-top (max) heap, from which the topK keys are easy to extract. We can also set a threshold: if a key is accessed more than, say, 500 times within one sliding window, we treat it as a hot key and promote it into the local cache.

Cache usage tips
Here are some practical tips for working with caches:
- Name keys so their meaning is clear at a glance, but keep them as short as readability allows to reduce memory usage. For values, prefer int over string where possible: Redis caches small integers internally as shared objects.
- When using hash keys in Redis, all fields of the same hash key land on the same Redis node, so an oversized hash skews memory and request distribution across nodes. Consider splitting a big hash into smaller ones so node memory stays even and no single node becomes a request hotspot.
- To avoid requests for nonexistent data turning every cache miss into a database hit, cache empty values.
- When storing objects in the cache, prefer protobuf serialization to minimize data size.
- When appending new data, make sure the cache entry already exists before appending; use Expire to determine whether it exists.
- For daily-login style storage, use BITSET; to avoid a single BITSET becoming too big or too hot, shard it.
- With sorted sets, avoid zrange or zrevrange calls that return very large collections; their complexity is high.
- Use PIPELINE where possible, but also avoid batching collections that are too large.
- Avoid oversized values.
- Always try to set an expiry on cached entries.
- Use full-collection commands such as Hash's HGETALL and Set's SMEMBERS with caution: they trigger a full scan of the underlying data structure, and with a large collection they block the Redis main thread. To fetch all members of a collection type, use SSCAN, HSCAN, etc. to return them in batches and reduce main-thread blocking.
- Use the MONITOR command with caution: MONITOR keeps writing the monitored traffic into the output buffer, and with heavy online command traffic the buffer overflows quickly, hurting Redis performance.
- Disable KEYS, FLUSHALL, FLUSHDB, etc. in production.
Conclusion
This article showed how to handle ultra-high request volumes with a local hotspot cache. Hotspot caches split into known and unknown hotspots: known hotspots are simple to handle and can be loaded from the database into memory ahead of time, while unknown hotspots require adaptively identifying the hot data and then promoting it into the local cache. Finally, we covered some tips for using caches in real production; apply them flexibly to avoid problems in the production environment.
I hope this article helps you. Thank you.
Updated every Monday and Thursday.
Code repository: https://github.com/zhoushuguang/lebron
Project address
https://github.com/zeromicro/go-zero
Welcome to use go-zero and star it to support us!
WeChat group
Follow the 『Microservice Practice』 official account and tap "Communication group" to get the community group QR code.