Principle and usage setting of large page memory
2022-07-29 01:07:00 【qq_42533…】
How memory page size improves performance
First, we need to review a small piece of computer architecture fundamentals; this helps in understanding why large memory pages improve JVM performance.
What is memory paging?
We know that the CPU accesses memory through addressing. A 32-bit CPU has an addressing range of 0~0xFFFFFFFF, which works out to 4 GB; in other words, the maximum physical memory it can address is 4 GB.
In practice, however, a problem arises: a program may need 4 GB of memory while less than 4 GB of physical memory is available, forcing the program to reduce its memory consumption.
To solve this kind of problem, modern CPUs introduced the MMU (Memory Management Unit).
The core idea of the MMU is to use virtual addresses in place of physical addresses: the CPU addresses memory with virtual addresses, and the MMU is responsible for mapping each virtual address to a physical address.
With the MMU in place, the physical memory limitation is hidden from the program; from the program's point of view, it is as if it really had 4 GB of memory.
Paging is a memory management mechanism built on top of the MMU. It divides the virtual address space into fixed-size (4 KB) pages and the physical address space into page frames, and guarantees that a page and a page frame are the same size.
From a data-structure point of view, this mechanism ensures efficient access to memory and allows the OS to support non-contiguous memory allocation.
When a program runs short of memory, rarely used physical pages can also be moved out to other storage devices such as disk; this is the familiar virtual memory.
As mentioned above, virtual addresses must be mapped to physical addresses before the CPU can work properly.
This mapping has to be stored somewhere. In modern CPU architectures, the mapping is kept in physical memory, in a structure called the page table.
(Figure: the interaction between the CPU, the page table, and physical memory.)
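To make the translation concrete, here is a minimal sketch of how a virtual address is split into a page number and an offset with 4 KB pages, and how a toy page table maps the page number to a page frame. The class name, the single-level HashMap page table, and the example mapping are all made up for illustration; real page tables are multi-level, hardware-defined structures.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal illustration of 4 KB paging: split a virtual address into a
// virtual page number (VPN) and an in-page offset, then translate the
// VPN to a physical frame number (PFN) through a page table.
public class PagingDemo {
    static final int PAGE_SIZE = 4 * 1024;   // 4 KB pages
    static final int OFFSET_BITS = 12;       // log2(4096)

    // Toy page table: VPN -> PFN (real page tables are multi-level structures)
    static final Map<Long, Long> PAGE_TABLE = new HashMap<>();

    static long translate(long virtualAddress) {
        long vpn = virtualAddress >>> OFFSET_BITS;        // virtual page number
        long offset = virtualAddress & (PAGE_SIZE - 1);   // offset inside the page
        Long pfn = PAGE_TABLE.get(vpn);
        if (pfn == null) {
            throw new IllegalStateException("page fault: VPN " + vpn + " not mapped");
        }
        return (pfn << OFFSET_BITS) | offset;             // physical address
    }

    public static void main(String[] args) {
        PAGE_TABLE.put(0x12345L, 0x00042L);               // map one page for the demo
        long va = (0x12345L << OFFSET_BITS) | 0x0ABC;
        System.out.printf("VA 0x%X -> PA 0x%X%n", va, translate(va));
    }
}
```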
Further optimization: introducing the TLB (Translation Lookaside Buffer)
As the previous section showed, page tables are stored in memory. The CPU accesses memory over the bus, which is necessarily slower than accessing registers directly.
To further improve performance, modern CPU architectures introduce the TLB, which caches the most frequently accessed page table entries.
(Figure: compared with the previous figure, a TLB is added between the CPU and the page table.)
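The role of the TLB can be sketched in the same toy style: before falling back to the page table in memory, the translation first consults a small cache of recent VPN→PFN entries. Everything below is illustrative only; real TLBs are hardware caches with set-associative lookup, and the 64-entry size is just an assumption.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of a TLB: a small, bounded cache of recent VPN -> PFN
// translations consulted before the (slow) page table in memory.
public class TlbDemo {
    static final int TLB_ENTRIES = 64;   // assumed size, for illustration only

    // LRU-evicting map standing in for the hardware TLB
    static final Map<Long, Long> TLB = new LinkedHashMap<Long, Long>(TLB_ENTRIES, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<Long, Long> eldest) {
            return size() > TLB_ENTRIES;
        }
    };

    static long lookup(long vpn, Map<Long, Long> pageTable) {
        Long pfn = TLB.get(vpn);
        if (pfn != null) {
            return pfn;                   // TLB hit: no extra memory access needed
        }
        pfn = pageTable.get(vpn);         // TLB miss: walk the page table in memory
        if (pfn == null) {
            throw new IllegalStateException("page fault: VPN " + vpn + " not mapped");
        }
        TLB.put(vpn, pfn);                // fill the TLB for next time
        return pfn;
    }

    public static void main(String[] args) {
        Map<Long, Long> pageTable = new LinkedHashMap<>();
        pageTable.put(0x12345L, 0x00042L);
        System.out.println("first lookup (miss): PFN 0x" + Long.toHexString(lookup(0x12345L, pageTable)));
        System.out.println("second lookup (hit): PFN 0x" + Long.toHexString(lookup(0x12345L, pageTable)));
    }
}
```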
Why support large memory pages?
The TLB's capacity is undoubtedly limited. When it is exceeded, a TLB miss occurs and the translation has to fall back to the page table in memory. If TLB misses happen frequently, program performance drops sharply.
To let the TLB cover more address mappings, the approach is to increase the size of a memory page.
If a page is 4 MB instead of 4 KB, each TLB entry covers roughly 1000 times more memory, which is a considerable performance gain.
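As a concrete, hypothetical example: a 64-entry TLB can cover 64 × 4 KB = 256 KB of address space with 4 KB pages, but 64 × 4 MB = 256 MB with 4 MB pages, i.e. roughly 1000 times more memory reachable without a TLB miss. (The 64-entry size is assumed purely for illustration; real TLB sizes vary by CPU.)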
Setting large page memory in the virtual machine:
echo 1024 > /proc/sys/vm/nr_hugepages
(For a machine with 2 GB of memory, set the value to 256; for 4 GB, set it to 512.)
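Once huge pages have been reserved at the OS level, the HotSpot JVM can be told to use them with the -XX:+UseLargePages flag (and, if a specific size is needed, -XX:LargePageSizeInBytes); the current reservation can be checked with grep Huge /proc/meminfo. These flag names apply to HotSpot; other JVMs may use different options.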