Project -- high concurrency memory pool
2022-07-03 15:52:00 【whole life】
I've been studying C++ for quite a while now. Today we'll work through a project modeled on Google's tcmalloc, and get to know its excellent design.
Requirements analysis
Our project applies the pooling technique from design patterns and improves on the traditional memory pool. A memory pool can significantly speed up memory allocation and release, and mitigate memory fragmentation.
Advantages and disadvantages of an ordinary memory pool
Let's first review the main idea of a memory pool: reserve a large block of memory in advance, and when the program needs memory, hand out a piece of that large block directly instead of going to the heap through new/malloc. This makes allocation and release much more efficient.
Advantages: higher efficiency, and it alleviates some memory fragmentation.
Disadvantages: it cannot handle the lock contention that arises when many threads request memory concurrently, which hurts efficiency.
Our memory pool solves these problems
Main design ideas
The high-concurrency memory pool consists of the following three parts, each with its own role:
Thread cache (thread cache): each thread has its own thread cache, which mainly eliminates lock contention between threads in high-concurrency, multithreaded scenarios. The thread cache serves allocations of less than 64 KB, and multiple threads can allocate from their own caches concurrently without taking any lock.
Central control cache (central cache): as the name suggests, this is the central structure of the high-concurrency memory pool, mainly responsible for memory scheduling. It cuts large blocks into pieces to hand out to thread caches, reclaims surplus memory from thread caches, and merges it for return to the page cache, so that memory is scheduled on demand and balanced across threads. It is the bridge of the whole project. (Note: a lock is needed here, since there is a thread-safety problem when multiple threads request or return memory to the central cache at the same time. This happens rarely, though, so it does not hurt performance much; overall the benefits outweigh the cost.)
Page cache (page cache): requests memory from the system in pages and supplies large blocks to the central cache. When the central cache has no free memory objects, it fetches page-sized blocks from the page cache on demand; the page cache also reclaims memory from the central cache and merges adjacent pages, which alleviates memory fragmentation.
Now let's implement this project from scratch.
Thread cache (thread cache)
First, let's think: what is the main purpose of our memory pool? It must be able to hand out memory to different threads at the same time. With that in mind, and following the memory pool design above, we use a hash bucket whose members are free lists (singly linked lists of free blocks) as the main structure of thread_cache:
```cpp
private:
    FreeList _freeLists[NFREELISTS]; // one free list per bucket; NFREELISTS buckets in total
```

Next we need to implement the free list itself.
FreeList class
Since the list hangs off a hash bucket and stores free memory blocks, it needs: 1. a pointer to the head node; 2. a count of the nodes under the bucket.
```cpp
private:
    void* _head = nullptr; // head of the free list
    size_t _size = 0;      // number of blocks currently in the list
    size_t _max_size = 1;  // used by MaxSize/SetMaxSize; the initial value here is my assumption
```

When a thread needs memory, we return a block of the corresponding size to it, which gives us:
```cpp
// Head delete: hand one block of memory back to the thread
void* Pop()
{
    void* obj = _head;
    _head = NextObj(_head);
    _size -= 1;
    return obj;
}
```

We need to explain NextObj(_head) here: this function takes the place of a linked list's next field. Its implementation follows.
```cpp
inline void*& NextObj(void* obj) // the node after obj
{
    // The first 4/8 bytes of obj store the address of the next node, so
    // returning those bytes is equivalent to returning the next node.
    // Casting to void** and dereferencing reads exactly sizeof(void*)
    // bytes: 4 on a 32-bit machine, 8 on a 64-bit machine (the size read
    // through a void** is just the size of a void*). Note: a void* itself
    // generally cannot be dereferenced (some compilers on Linux allow it
    // as an extension), but a void** can.
    return *((void**)obj);
}
```

With memory handed out to threads, a thread will eventually return blocks it has finished with to the memory pool; we simply insert them back into our list:
```cpp
// Head insert: take a returned block back into the list
void Push(void* obj)
{
    NextObj(obj) = _head;
    _head = obj;
    _size += 1;
}
```

Plus some basic accessors:
```cpp
bool Empty() { return _head == nullptr; }
size_t MaxSize() { return _max_size; }
void SetMaxSize(size_t n) { _max_size = n; }
size_t Size() { return _size; }
```