[glide] cache implementation - memory and disk cache
2022-07-04 17:00:00 【Jkwen2022】
Brief introduction
This article is a first look at Glide's cache modes and how they are implemented. It is divided into the following parts:
- Glide's cache modes
- Principles of the memory and disk caches
- Concrete implementation of the cache algorithm
Glide's cache modes
Glide has two main kinds of cache: a memory cache and a disk cache. The memory cache is further divided, based on whether the data is currently in use, into Active and Not Active resources. The disk cache is further divided, based on whether the image has been transformed, into a Resource Cache (the transformed result) and a Data Cache (the original source data). The rest of this article takes a brief look at the memory and disk caches.
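As an aside, the Resource Cache / Data Cache split can be selected per request through DiskCacheStrategy. The sketch below is purely illustrative and not from the Glide source discussed in this article; the context, imageView, and URL are placeholders.

// Hypothetical usage sketch: choosing which disk cache variant a request uses.
// DiskCacheStrategy.RESOURCE caches only the transformed image (Resource Cache),
// DiskCacheStrategy.DATA caches only the original bytes (Data Cache),
// DiskCacheStrategy.ALL caches both, and NONE disables the disk cache.
Glide.with(context)
    .load("https://example.com/image.png") // placeholder URL
    .diskCacheStrategy(DiskCacheStrategy.RESOURCE)
    .into(imageView); // imageView is assumed to exist

With that distinction in mind, let's see how the two caches are created, starting from GlideBuilder.build():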
//GlideBuilder.java
@NonNull
Glide build(@NonNull Context context) {
  // Omitted code ...
  // Implementation of the memory cache
  if (memoryCache == null) {
    memoryCache = new LruResourceCache(memorySizeCalculator.getMemoryCacheSize());
  }
  // Factory implementation of the disk cache
  if (diskCacheFactory == null) {
    diskCacheFactory = new InternalCacheDiskCacheFactory(context);
  }
  if (engine == null) {
    engine =
        new Engine(
            memoryCache,
            diskCacheFactory,
            diskCacheExecutor,
            sourceExecutor,
            GlideExecutor.newUnlimitedSourceExecutor(),
            animationExecutor,
            isActiveResourceRetentionAllowed);
  }
  // Omitted code ...
  return new Glide(
      context,
      engine,
      memoryCache,
      bitmapPool,
      arrayPool,
      requestManagerRetriever,
      connectivityMonitorFactory,
      logLevel,
      defaultRequestOptionsFactory,
      defaultTransitionOptions,
      defaultRequestListeners,
      experiments);
}
The memory cache implementation is LruResourceCache; once created, it is passed as a parameter into both the Engine and the Glide object. The disk cache factory implementation is InternalCacheDiskCacheFactory; once created, it is only passed into the Engine. The default disk cache that the factory ultimately creates is DiskLruCacheWrapper.
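These defaults can also be replaced by the application. Below is a minimal sketch, assuming a standard Glide 4 AppGlideModule setup; the module class name and the "two screens" multiplier are made up for illustration.

// Hypothetical AppGlideModule overriding the default memory cache size.
// MemorySizeCalculator is the same class GlideBuilder uses internally.
@GlideModule
public final class MyAppGlideModule extends AppGlideModule {
  @Override
  public void applyOptions(@NonNull Context context, @NonNull GlideBuilder builder) {
    MemorySizeCalculator calculator =
        new MemorySizeCalculator.Builder(context)
            .setMemoryCacheScreens(2) // illustrative value: roughly two screens' worth of pixels
            .build();
    builder.setMemoryCache(new LruResourceCache(calculator.getMemoryCacheSize()));
  }
}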
Principles of the memory and disk caches
Judging from the implementations of LruResourceCache and DiskLruCacheWrapper, both caches are based on the LRU algorithm. Let's look at them together with the source code.
First, the memory cache:
public class LruResourceCache extends LruCache<Key, Resource<?>> implements MemoryCache {
  // MemoryCache defines the behavior of the memory cache. Which behaviors exactly?
  /*
   1. Put:    put(Key key, Resource<?> resource)
   2. Remove: remove(Key key)
   3. Clear:  clearMemory()
  */
  // LruCache<T, Y> is the LRU implementation class. It maintains the cached data
  // through an internal LinkedHashMap and already implements put, get, remove,
  // clear and so on. As a subclass, LruResourceCache only needs to implement
  // getSize(Y item) to specify the size of a single cached item. The method
  // implementations in LruCache are the key part; LruResourceCache itself has
  // very little left to do.
}
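To make that division of labor concrete, here is a minimal, hypothetical subclass in the same spirit as LruResourceCache. BitmapSizeCache is not a Glide class; it only illustrates that a subclass merely supplies getSize so the cache is bounded by total bytes rather than entry count.

import android.graphics.Bitmap;
import androidx.annotation.Nullable;
import com.bumptech.glide.util.LruCache;

// Hypothetical subclass of Glide's LruCache: only getSize needs to be provided.
public class BitmapSizeCache extends LruCache<String, Bitmap> {
  public BitmapSizeCache(long maxSizeBytes) {
    super(maxSizeBytes);
  }

  @Override
  protected int getSize(@Nullable Bitmap item) {
    // Size of a single cached item in bytes; null counts as 0
    return item == null ? 0 : item.getAllocationByteCount();
  }
}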
Next, the disk cache:
public class DiskLruCacheWrapper implements DiskCache {
  // DiskCache defines the behavior of the disk cache. Similar to MemoryCache,
  // it also declares a set of operations:
  /*
   1. Put:    put(Key key, Writer writer)
   2. Delete: delete(Key key)
   3. Clear:  clear()
   4. Get:    get(Key key) -- not declared in MemoryCache, but defined and
      implemented in LruCache.
   Besides these operations, DiskCache also declares two nested interfaces:
   a Factory for creating the disk cache, and a Writer used for writing data
   into the cache.
  */
  // Similar to LruCache, DiskLruCache is the LRU implementation class for the disk cache.
  private DiskLruCache diskLruCache;

  private synchronized DiskLruCache getDiskCache() throws IOException {
    if (diskLruCache == null) {
      // The diskLruCache object is created through the open method
      diskLruCache = DiskLruCache.open(directory, APP_VERSION, VALUE_COUNT, maxSize);
    }
    return diskLruCache;
  }
}
DiskLruCache is more complicated than the in-memory LruCache: 1. it involves a thread pool; 2. it involves reading and writing files. Despite these differences, the core idea (a LinkedHashMap holding the cached entries) is the same.
Concrete implementation of the cache algorithm
Before analyzing how Glide implements its cache algorithm, let's briefly review the LRU algorithm itself.
LRU (Least Recently Used) is a page replacement algorithm: each page gets a field recording when it was last used, and when the storage space is full, the pages are compared on that field and the least recently used one is evicted to make room for the new page.
In terms of principles: 1. the storage space is bounded, i.e. there is a maximum capacity; 2. a data structure is needed that supports replacement, i.e. that can find the least recently used element.
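A LinkedHashMap constructed with accessOrder = true already satisfies both requirements, which is exactly why Glide builds its caches on it. The following is a minimal, self-contained sketch; the capacity of 3 and the keys are made up for illustration.

import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
  public static void main(String[] args) {
    final int maxEntries = 3; // illustrative capacity
    // accessOrder = true: iteration order is least-recently-used first
    Map<String, String> lru =
        new LinkedHashMap<String, String>(16, 0.75f, true) {
          @Override
          protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
            // Evict the eldest (least recently used) entry once we exceed capacity
            return size() > maxEntries;
          }
        };

    lru.put("a", "1");
    lru.put("b", "2");
    lru.put("c", "3");
    lru.get("a");        // touching "a" makes "b" the least recently used
    lru.put("d", "4");   // exceeds the capacity, so "b" is evicted
    System.out.println(lru.keySet()); // prints [c, a, d]
  }
}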
Analysis of the put method of LruCache:
public class LruCache<T, Y> {
  private final Map<T, Entry<Y>> cache = new LinkedHashMap<>(100, 0.75f, true);

  public LruCache(long size) {
    // The constructor simply records the total capacity of the cache
    // (once it is full, old entries are replaced).
    // Note!!! This size is the total capacity of the cache, in the same unit
    // that getSize() reports.
    /* Glide assigns it from the memoryCacheSize field of MemorySizeCalculator;
       that value varies with the screen size of the device and is in bytes. */
    this.initialMaxSize = size;
    this.maxSize = size;
  }

  @Nullable
  public synchronized Y put(@NonNull T key, @Nullable Y item) {
    // getSize() reports the size of this item, in bytes
    final int itemSize = getSize(item);
    if (itemSize >= maxSize) {
      // The item alone is larger than the whole cache;
      // onItemEvicted's implementation is left to subclasses
      onItemEvicted(key, item);
      return null;
    }
    if (item != null) {
      currentSize += itemSize;
    }
    // Store the item in the LinkedHashMap-backed cache. The return value tells
    // us whether an old entry was displaced and needs to be evicted.
    @Nullable
    Entry<Y> old = cache.put(key, item == null ? null : new Entry<>(item, itemSize));
    if (old != null) {
      currentSize -= old.size;
      if (!old.value.equals(item)) {
        onItemEvicted(key, old.value);
      }
    }
    // Check whether there is still enough space; if not, evict entries.
    evict();
    return old != null ? old.value : null;
  }

  private void evict() {
    trimToSize(maxSize);
  }

  protected synchronized void trimToSize(long size) {
    Map.Entry<T, Entry<Y>> last;
    Iterator<Map.Entry<T, Entry<Y>>> cacheIterator;
    while (currentSize > size) {
      // Take the first element of the cache directly: because the backing
      // LinkedHashMap is in access order, the first element is the least
      // recently used one. Keep releasing entries until there is enough space.
      cacheIterator = cache.entrySet().iterator();
      last = cacheIterator.next();
      final Entry<Y> toRemove = last.getValue();
      currentSize -= toRemove.size;
      final T key = last.getKey();
      cacheIterator.remove();
      onItemEvicted(key, toRemove.value);
    }
  }

  // Static class that wraps a stored element together with its size.
  static final class Entry<Y> {
    final Y value;
    final int size;

    @Synthetic
    Entry(Y value, int size) {
      this.value = value;
      this.size = size;
    }
  }
}
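To see put() and trimToSize() in action, here is a small, hypothetical usage example of Glide's LruCache, bounded by bytes instead of entry count. The 10-byte capacity and the keys are made up for illustration.

import androidx.annotation.Nullable;
import com.bumptech.glide.util.LruCache;

public class LruCacheDemo {
  public static void main(String[] args) {
    // Capacity of 10 "bytes"; getSize reports each value's length
    LruCache<String, byte[]> cache =
        new LruCache<String, byte[]>(10) {
          @Override
          protected int getSize(@Nullable byte[] item) {
            return item == null ? 0 : item.length;
          }
        };

    cache.put("a", new byte[4]);
    cache.put("b", new byte[4]);
    cache.get("a");               // "a" becomes the most recently used entry
    cache.put("c", new byte[4]);  // total would be 12 > 10, so "b" is evicted
    System.out.println(cache.get("b") == null); // true: "b" was trimmed away
    System.out.println(cache.get("a") != null); // true: "a" survived
  }
}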
Analysis of the put implementation of DiskLruCache:
public final class DiskLruCache implements Closeable {
  // The default size is 250MB, specified when the cache is created via
  // InternalCacheDiskCacheFactory
  private long maxSize;
  private final LinkedHashMap<String, Entry> lruEntries =
      new LinkedHashMap<String, Entry>(0, 0.75f, true);

  public final class Editor {
    private final Entry entry;
    private final boolean[] written;
    private boolean committed;

    public File getFile(int index) throws IOException {
      synchronized (DiskLruCache.this) {
        if (entry.currentEditor != this) {
          throw new IllegalStateException();
        }
        if (!entry.readable) {
          written[index] = true;
        }
        File dirtyFile = entry.getDirtyFile(index);
        directory.mkdirs();
        return dirtyFile;
      }
    }
  }
}
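The 250MB default (and the cache directory name) can be changed in the same applyOptions hook shown earlier. Below is a minimal sketch, assuming a Glide 4 AppGlideModule like the hypothetical MyAppGlideModule above; the 100MB figure and the directory name are illustrative.

// Hypothetical override of the default disk cache: same factory class,
// but with an explicit directory name and size (values are illustrative).
@Override
public void applyOptions(@NonNull Context context, @NonNull GlideBuilder builder) {
  long diskCacheSizeBytes = 100L * 1024 * 1024; // 100 MB instead of the 250 MB default
  builder.setDiskCache(
      new InternalCacheDiskCacheFactory(context, "my_image_cache", diskCacheSizeBytes));
}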
Disk cache operations go through the DiskLruCacheWrapper object, so let's take a look at that class, and its put method in particular:
//DiskLruCacheWrapper
public class DiskLruCacheWrapper implements DiskCache {
  private DiskLruCache diskLruCache;

  @Override
  public void put(Key key, Writer writer) {
    // SafeKeyGenerator turns the Key into a string that is safe to use as a file name
    String safeKey = safeKeyGenerator.getSafeKey(key);
    // Omitted code: a per-key write lock is acquired here ...
    try {
      try {
        DiskLruCache diskCache = getDiskCache();
        // Look up the value for safeKey; if it is already cached, there is nothing to do
        Value current = diskCache.get(safeKey);
        if (current != null) {
          return;
        }
        // Obtain the editor for safeKey
        DiskLruCache.Editor editor = diskCache.edit(safeKey);
        try {
          File file = editor.getFile(0);
          // Write the data into the file through the Writer
          if (writer.write(file)) {
            // Commit only if the write succeeded
            editor.commit();
          }
        } finally {
          // Abort the edit if it was never committed
          editor.abortUnlessCommitted();
        }
      } catch (IOException e) {
        // Omitted code: log the failure ...
      }
    } finally {
      // Omitted code: release the per-key write lock ...
    }
  }
}
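For completeness, the data itself arrives through the DiskCache.Writer callback mentioned above. In practice Glide's engine supplies the Writer internally when it encodes resources; the sketch below is purely illustrative (cacheBytes is not a Glide method, and the diskCache and key parameters are assumed to already exist).

// Hypothetical helper showing how a caller supplies data through DiskCache.Writer.
// diskCache is assumed to be an existing DiskCache instance (for example, the
// DiskLruCacheWrapper discussed above); key identifies the image being cached.
void cacheBytes(DiskCache diskCache, Key key, byte[] encodedImage) {
  diskCache.put(key, new DiskCache.Writer() {
    @Override
    public boolean write(@NonNull File file) {
      // Receives the dirty file returned by editor.getFile(0) inside put()
      try (FileOutputStream out = new FileOutputStream(file)) {
        out.write(encodedImage);
        return true;  // true -> editor.commit() runs
      } catch (IOException e) {
        return false; // false -> the edit is aborted
      }
    }
  });
}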
DiskLruCache is implemented in a more involved way than LruCache; for now this is only a first look, and a more detailed analysis will follow in a later article.