Do you know all four common cache modes?
2022-07-07 18:57:00 【JAVA Chinese community】
Summary
In system architecture, caching is one of the simplest ways to improve performance. Anyone with even a little development experience has dealt with caching, or at least tried it.
Used well, caching can reduce response time, lower database load, and save cost. Used badly, it can cause some hard-to-explain problems.
Different scenarios call for different caching strategies. If, in your experience, caching is just a matter of simple query and update operations, this article is worth a careful read.
Here we systematically explain four cache modes, along with their usage scenarios, workflows, and pros and cons.
Selection of caching strategy
Essentially, the right caching strategy depends on the data and its access patterns; in other words, on how the data is written and read.
For example:
Does the system write far more than it reads? (e.g., time-based logging)
Is the data written once and read many times? (e.g., user profiles)
Is the returned data always unique? (e.g., search results)
Choosing the right caching strategy is the key to improving performance.
There are four common cache strategies:
Cache-Aside Pattern: bypass caching mode
Read-Through Cache Pattern: read-through mode
Write-Through Cache Pattern: write-through mode
Write-Behind Pattern (also called Write-Back): asynchronous cache write mode
These strategies are divided according to how data flows during reads and writes: under some, the application interacts only with the cache; under others, it interacts with both the cache and the database. Because this operational boundary is the key dimension of the division, pay special attention to it as you work through the flows below.
Cache Aside
Cache-Aside is the most common caching pattern: the application talks directly to both the cache and the database, and the pattern covers both read and write operations.
Read flow chart:

The read flow:
The application receives a data query (read) request;
The application checks whether the requested data is in the cache:
If it is (cache hit), the data is read from the cache and returned directly;
If it is not (cache miss), the data is fetched from the database, stored in the cache, and then returned.
Note the operational boundary here: both the database and the cache are operated on directly by the application.
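The read flow above can be sketched in Python. The dict-based cache and the `fetch_user_from_db` function are illustrative stand-ins, not a real cache client or database driver:

```python
# Cache-Aside read: the application checks the cache first and, on a
# miss, loads from the database and fills the cache itself.
cache = {}

def fetch_user_from_db(user_id):
    # Stand-in for a real database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    if user_id in cache:                   # cache hit: return directly
        return cache[user_id]
    data = fetch_user_from_db(user_id)     # cache miss: read the database
    cache[user_id] = data                  # store in the cache for next time
    return data
```

Note that both stores are manipulated by the application code itself, which is exactly the operational boundary described above.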
Write flow chart:

Write operations here include create, update, and delete. In the Cache-Aside pattern, a write updates the database first (insert, delete, or update) and then deletes the cache entry directly.
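A minimal sketch of the write side, with plain dicts standing in for the cache and the database:

```python
# Cache-Aside write: update the database first, then delete the cache
# entry so the next read reloads fresh data.
cache = {"42": {"name": "old"}}
db = {"42": {"name": "old"}}

def update_user(user_id, data):
    db[user_id] = data            # 1. update the database first
    cache.pop(user_id, None)      # 2. then delete (not update) the cache entry
```

Deleting rather than updating the cache entry means the next read repopulates it from the database, which narrows (but does not close) the inconsistency window mentioned below.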
Cache-Aside fits most scenarios. To handle different kinds of data, there are two common strategies for loading the cache:
Lazy loading: query the database only when the data is first needed; subsequent requests are served from the cache;
Preloading: load the cache programmatically at (or right after) application startup, for rarely changing data such as country information, currency information, user information, or news.
Cache-Aside suits read-heavy, write-light workloads, such as user profiles or news articles: once written to the cache, the data is almost never modified. Its drawback is that the double write to cache and database can become inconsistent.
Cache-Aside is also something of a standard pattern; Facebook, for example, uses it.
Read Through
Read-Through is very similar to Cache-Aside. The difference is that the application no longer cares where the data comes from (cache or database): it only reads from the cache, and the cache itself decides where the data originates.
In Cache-Aside, the caller is responsible for loading data into the cache; in Read-Through, the cache service loads it itself, so loading is transparent to the application. The advantage of Read-Through is that it keeps application code simpler.
This is exactly the operational boundary question mentioned above. Look at the flow chart:

In the flow chart above, focus on the operations inside the dotted box: they are no longer handled by the application but by the cache itself. In other words, when the application queries the cache for a piece of data that is missing, the cache loads the data on its own and then returns the result to the application.
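One way to picture this boundary shift: the loader moves inside the cache component, so the application never touches the database directly. A minimal sketch, where the `ReadThroughCache` class and its `loader` argument are hypothetical names for illustration:

```python
class ReadThroughCache:
    """Cache that loads missing entries itself via the supplied loader."""

    def __init__(self, loader):
        self._loader = loader  # e.g., a function that queries the database
        self._store = {}

    def get(self, key):
        if key not in self._store:                # miss: the cache itself,
            self._store[key] = self._loader(key)  # not the application, loads
        return self._store[key]

# The application only ever talks to the cache:
profiles = ReadThroughCache(loader=lambda user_id: {"id": user_id})
```

Compared with the Cache-Aside read sketch earlier, the miss-handling code has moved from the application into the cache.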
Write Through
In Cache-Aside, the application has to maintain two data stores, a cache and a database, which is somewhat cumbersome for the application.
In Write-Through mode, all writes go through the cache: every time data is written to the cache, the cache persists it to the underlying database, and the two operations complete in a single transaction, so the write succeeds only if both writes succeed. The cost is write latency; the benefit is data consistency.
You can think of it this way: the application treats the backend as a single store, and the store maintains its own cache.
Because the application interacts only with the cache, the code becomes simpler and cleaner, which is especially noticeable when the same logic has to be reused in many places.
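A sketch of the idea, assuming a dict as the database. A real implementation would use an actual transaction; here the single-transaction behavior is only approximated by rolling back the cache entry if the database write fails:

```python
class WriteThroughCache:
    """Writes go to the cache, which synchronously persists to the database."""

    def __init__(self, db):
        self._db = db      # stand-in for a real database
        self._store = {}

    def put(self, key, value):
        old = self._store.get(key)
        self._store[key] = value       # write the cache...
        try:
            self._db[key] = value      # ...and persist synchronously
        except Exception:
            # Approximate the single transaction: undo the cache write on failure.
            if old is None:
                self._store.pop(key, None)
            else:
                self._store[key] = old
            raise

    def get(self, key):
        return self._store.get(key)
```

The application calls only `put` and `get`; the synchronous database write is the source of both the extra latency and the consistency guarantee.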

Write-Through is generally used together with Read-Through. A typical use case for Write-Through is a banking system.
Write-Through is a good fit when:
the same data is read frequently;
data loss (compared with Write-Behind) and data inconsistency cannot be tolerated.
When using Write-Through, pay special attention to cache expiration and eviction management; otherwise a large volume of cached entries will consume memory, and stale cache data may even evict valid data.
Write-Behind
Write-Behind is very similar to Write-Through in that "the application interacts only with the cache and can only write data through the cache". The difference is that Write-Through writes the data to the database immediately, while Write-Behind writes it after a delay (or when triggered some other way); this asynchronous write is Write-Behind's defining feature.
The database writes can be carried out in different ways. One approach is to collect all write operations and batch-write them at a certain point in time (for example, when database load is low). Another is to merge writes into small batches: the cache collects a few write operations and then writes each batch out together.
Asynchronous writes greatly reduce request latency and lighten the database's load, but they also amplify data inconsistency. For example, if someone queries the database directly while an updated value has not yet been written back, the query returns stale data.
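The batching variant can be sketched like this. The `batch_size` threshold and the flush trigger are arbitrary choices for illustration; real systems typically flush on timers, queue size, or database load:

```python
class WriteBehindCache:
    """Writes land in the cache immediately; the database is updated in batches."""

    def __init__(self, db, batch_size=3):
        self._db = db                 # stand-in for a real database
        self._store = {}
        self._dirty = {}              # pending writes not yet persisted
        self._batch_size = batch_size

    def put(self, key, value):
        self._store[key] = value      # the caller sees the write at once
        self._dirty[key] = value      # repeated writes to one key are merged
        if len(self._dirty) >= self._batch_size:
            self.flush()

    def flush(self):
        # Batch-write every pending entry to the database, then clear the buffer.
        self._db.update(self._dirty)
        self._dirty.clear()
```

Until `flush` runs, the database lags behind the cache, which is exactly the inconsistency window described next.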
Summary
Each caching mode has its own considerations and characteristics. Choose the mode that fits your application's scenario flexibly; in practice, several modes are often combined.
