[MIT 6.S081] Lec 8: Page Faults notes
2022-07-27 18:27:00 【PeakCrosser】
Lec 8: Page faults
- Ref: https://github.com/huihongxiao/MIT6.S081/tree/master/lec08-page-faults-frans
- Preparation: xv6 book Chapter 4 Section 4.6
Page fault summary
Features implemented using page faults
- lazy allocation
- copy-on-write (COW) fork
- demand paging
- memory-mapped files (mmap)
Advantages of virtual memory
- Isolation: address-space isolation between the kernel and applications, and between applications.
- Level of indirection: an abstraction, the mapping from virtual addresses to physical addresses.
- trampoline page: one physical page mapped into multiple user address spaces
- guard page: protects the stacks in kernel space and user space; it is not mapped to any physical address
Information the kernel needs to handle a page fault
- the faulting va: the virtual address that triggered the page fault. When a page fault occurs in XV6, the faulting virtual address is saved in the STVAL register.
- the type of page fault: the SCAUSE register (which records the reason for trapping into kernel mode) has several values related to page faults: load, store, and instruction page faults. A page fault uses the same trap mechanism to switch from user space to kernel space, and both the STVAL and SCAUSE registers are set.
- the va of the instruction that caused the fault: the address of the faulting instruction. As part of the trap handling code, it is saved in the SEPC register and in trapframe->epc (see the sketch after this list).
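As a rough illustration, the sketch below shows how the page-fault path in usertrap() (kernel/trap.c) could read these registers. r_scause(), r_stval() and r_sepc() are the register-read helpers that exist in xv6's kernel/riscv.h; handle_page_fault() is a hypothetical handler name, and the branch structure is a simplified sketch rather than the full stock handler.

```c
#include "types.h"
#include "param.h"
#include "memlayout.h"
#include "riscv.h"
#include "spinlock.h"
#include "proc.h"
#include "defs.h"

void
usertrap(void)
{
  struct proc *p = myproc();

  // save the user program counter so the process can resume
  // (or retry the faulting instruction) later
  p->trapframe->epc = r_sepc();

  uint64 scause = r_scause();   // why we trapped into the kernel
  uint64 va = r_stval();        // the faulting virtual address (for page faults)

  if(scause == 13 || scause == 15 || scause == 12){
    // 13 = load page fault, 15 = store page fault, 12 = instruction page fault
    if(handle_page_fault(va) < 0)   // hypothetical handler, e.g. lazy_alloc(va)
      p->killed = 1;
  } else if(scause == 8){
    // system call
  } else {
    // unexpected trap: kill the process
  }
  // ... the rest of usertrap() (device interrupts, usertrapret()) omitted ...
}
```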
Lazy Allocation
- XV6 grows or shrinks a process's memory through the sbrk() system call. It operates on the heap part of the address space, using p->sz (the dividing line between heap and stack) as the starting point.
- By default XV6 uses eager allocation: as soon as sbrk() is called, the kernel immediately allocates the physical memory, so even memory that is never used gets allocated, which wastes memory.
- lazy allocation: sbrk() does not actually allocate physical memory; it only increases p->sz by n (the number of bytes requested). Physical memory is allocated later, when a page fault is triggered (allocate 1 page -> zero the page's contents -> map it into the user page table -> re-execute the faulting instruction). When freeing the process's page table, the kernel must also remember to skip virtual memory that was never actually allocated. A sketch of both halves follows below.
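A minimal sketch in the spirit of the xv6 lazy-allocation lab (not xv6's stock code): sys_sbrk() only moves p->sz, and the physical page is supplied later from the page-fault path. kalloc(), memset(), mappages(), kfree(), PGROUNDDOWN and the PTE_* flags are real xv6 names; the helper lazy_alloc() and the omission of the shrink case are choices made for this sketch.

```c
// kernel/sysproc.c -- lazy version of sbrk(): only grow p->sz, no kalloc() yet
uint64
sys_sbrk(void)
{
  int n;
  struct proc *p = myproc();

  argint(0, &n);            // fetch n (argint's exact signature varies between xv6 versions)
  uint64 oldsz = p->sz;
  p->sz += n;               // just move the heap boundary (shrinking omitted here)
  return oldsz;
}

// called from usertrap() when scause is 13 or 15 and the faulting va looks like heap
int
lazy_alloc(uint64 va)
{
  struct proc *p = myproc();

  if(va >= p->sz)
    return -1;                         // above the heap: a real fault, not lazy allocation
                                       // (a fuller version would also reject the stack guard page)
  char *mem = kalloc();                // 1. allocate one physical page
  if(mem == 0)
    return -1;                         // out of memory
  memset(mem, 0, PGSIZE);              // 2. zero its contents
  if(mappages(p->pagetable, PGROUNDDOWN(va), PGSIZE,
              (uint64)mem, PTE_W|PTE_R|PTE_U) != 0){   // 3. map into the user page table
    kfree(mem);
    return -1;
  }
  return 0;                            // 4. returning to user space retries the instruction
}
```

As noted above, routines that walk the page table (e.g. uvmunmap()/uvmcopy()) would also need to tolerate PTEs that were never actually mapped.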
Zero Fill On Demand
User program address space
A user program's address space contains: a text segment, which stores instructions; a data segment, which stores initialized global (static) variables; and a BSS segment, which stores global (static) variables that are uninitialized or initialized to 0.
Zero-filling the BSS on demand
For the BSS segment, since its initial values are all 0, there is no need to actually allocate physical memory for it; it is enough to remember that its initial value is 0.
So for the BSS segment, no matter how many virtual pages its virtual addresses cover, only one physical page whose contents are all 0 is actually allocated, and all of the virtual pages are mapped to that physical page.
This all-zero physical page is mapped read-only. Writing to a variable in it triggers a page fault; a new readable and writable physical page is then allocated, its contents are set to 0, and the write is performed on it.
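A hypothetical sketch of that write-fault path (xv6 does not implement zero fill on demand; zero_page and bss_fault() are names invented here, while kalloc(), memset(), uvmunmap() and mappages() are real xv6 routines):

```c
static char *zero_page;   // one shared, all-zero physical page, allocated once at load time

// At program load, every BSS virtual page would be mapped read-only to zero_page:
//   mappages(pagetable, va, PGSIZE, (uint64)zero_page, PTE_R | PTE_U);

// On a store page fault at a BSS address:
int
bss_fault(pagetable_t pagetable, uint64 va)
{
  va = PGROUNDDOWN(va);
  char *mem = kalloc();                   // a private page for this process
  if(mem == 0)
    return -1;
  memset(mem, 0, PGSIZE);                 // BSS data starts out as 0
  uvmunmap(pagetable, va, 1, 0);          // drop the read-only mapping; do not free zero_page
  if(mappages(pagetable, va, PGSIZE, (uint64)mem,
              PTE_R | PTE_W | PTE_U) != 0){
    kfree(mem);
    return -1;
  }
  return 0;                               // the faulting store is retried and now succeeds
}
```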
Pros and cons of zero fill on demand
- Advantages
    - Like lazy allocation, it saves memory.
    - exec has less work to do, so the program starts faster.
- Disadvantages
    - It only postpones some allocation work until the page fault is handled, and since each fault traps into the kernel, there is extra save/restore and performance overhead.
Copy-On-Write fork
Consider that after fork() the child often calls exec() to load and run a new program, which frees the memory that was just created for the child.
Therefore fork() does not allocate physical memory for the child and copy the parent's address space into it; instead it lets the child share the parent's physical memory, by making the child's PTEs point to the parent's physical memory pages.
Initially the PTEs of both parent and child must be set read-only. If either of them tries to modify the memory, a page fault occurs: a new physical page is allocated for the child, the contents of the relevant page are copied into it, and at that point the new and the old PTE become readable and writable for the child and the parent respectively.
When the page fault is triggered, the kernel must be able to distinguish an ordinary write to read-only memory from the copy-on-write case. This generally needs hardware support: the RISC-V PTE used by XV6 has 2 bits reserved for software, and the kernel can use one of them as a COW flag.
Because with COW one physical page may back the virtual memory of multiple processes, a process that exits cannot simply free the physical memory behind its virtual memory; it must first check whether the physical page is still used by other processes. Physical pages therefore need reference counting: the count is decremented by 1 when a virtual page is released, and the physical page is actually freed only when the count reaches 0. A sketch of the fault handler follows below.
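A minimal sketch of such a COW fault handler, along the lines of the xv6 COW lab (not stock xv6). walk(), PTE2PA, PA2PTE, PTE_FLAGS, kalloc(), memmove() and kfree() are real xv6 names (walk() may need to be declared in defs.h); PTE_COW, the bit chosen for it, and the assumption that kfree() is reference-count aware are choices made for this sketch.

```c
#define PTE_COW (1L << 8)   // one of the two RSW (reserved-for-software) bits in a RISC-V PTE

int
cow_fault(pagetable_t pagetable, uint64 va)
{
  if(va >= MAXVA)
    return -1;
  pte_t *pte = walk(pagetable, va, 0);
  if(pte == 0 || (*pte & PTE_V) == 0 || (*pte & PTE_COW) == 0)
    return -1;                          // not a COW page: treat as an ordinary illegal write

  uint64 old_pa = PTE2PA(*pte);
  char *mem = kalloc();                 // new private page for the writing process
  if(mem == 0)
    return -1;
  memmove(mem, (char*)old_pa, PGSIZE);  // copy the shared contents

  uint64 flags = (PTE_FLAGS(*pte) & ~PTE_COW) | PTE_W;
  *pte = PA2PTE((uint64)mem) | flags;   // point the PTE at the copy, now writable

  kfree((void*)old_pa);                 // assumed to decrement the page's reference count
                                        // and free it only when the count reaches 0
  return 0;
}
```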
Demand Paging
An application's text (code) segment and data segment can be large, and there is no need to allocate that much physical memory for them at the start; instead only virtual addresses are assigned, and the corresponding PTEs are invalid and do not point to any physical address.
When the program's instructions run, this may trigger a page fault; at that point the kernel allocates a physical page, reads the code or data from the program file into it, maps it into the user process's page table, and finally re-executes the instruction.
For data or code segments that are never used, physical memory is never actually allocated. A sketch of the fault path follows below.
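A hypothetical sketch of that fault path (xv6's stock exec() actually loads segments eagerly with loadseg()). readi(), ilock()/iunlock(), kalloc() and mappages() are real xv6 routines; the bookkeeping fields p->exec_ip, p->exec_off and p->exec_base, assumed to be recorded at exec() time, are invented for this sketch.

```c
// load one page of the executable on demand when an unmapped text/data address faults
int
demand_page(struct proc *p, uint64 va)
{
  va = PGROUNDDOWN(va);
  char *mem = kalloc();
  if(mem == 0)
    return -1;
  memset(mem, 0, PGSIZE);

  // read the page of the segment that corresponds to this virtual address,
  // using file-offset bookkeeping assumed to have been saved by exec()
  ilock(p->exec_ip);
  readi(p->exec_ip, 0, (uint64)mem, p->exec_off + (va - p->exec_base), PGSIZE);
  iunlock(p->exec_ip);

  // permissions depend on the segment: PTE_X for text, PTE_W for data
  if(mappages(p->pagetable, va, PGSIZE, (uint64)mem,
              PTE_R | PTE_X | PTE_U) != 0){
    kfree(mem);
    return -1;
  }
  return 0;   // the faulting instruction is re-executed on return to user space
}
```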
When memory runs out, the solution is to evict some physical pages: write their contents back to the file system and free the physical pages. A page-replacement policy such as LRU is generally used. Between dirty pages (dirty-page, pages that have been written) and non-dirty pages (non-dirty page), non-dirty pages are usually chosen, because a dirty page that is modified again later costs 2 operations: one to write the current modified contents back to the file, and another to reload the page into memory for the next modification. A non-dirty page only needs its PTE marked invalid; nothing has to be written back to the file, and when the page is used again it can simply be reloaded from the file into memory.
The PTE has a D flag bit (Dirty bit) indicating whether the page is dirty (has been written), and an A flag bit (Access bit) indicating whether the page has been read or written. The A bit needs to be cleared periodically, and based on it the kernel can compute the LRU ranking of memory pages. A sketch of scanning the A bit follows below.
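A hypothetical sketch of using the Access bit to approximate LRU (xv6 does not evict pages to disk). The bit positions follow the RISC-V spec, and walk() and PTE_V are real xv6 names; PTE_A, PTE_D and the scan routine itself are defined here for illustration.

```c
#define PTE_A (1L << 6)   // set by hardware when the page is read or written
#define PTE_D (1L << 7)   // set by hardware when the page is written (dirty)

// Periodically call this on each mapped user page: pages whose A bit is still
// clear since the last scan have not been touched recently and make better
// eviction victims; among those, prefer pages whose D bit is clear, since they
// need no write-back.
int
recently_used(pagetable_t pagetable, uint64 va)
{
  pte_t *pte = walk(pagetable, va, 0);
  if(pte == 0 || (*pte & PTE_V) == 0)
    return 0;
  int used = (*pte & PTE_A) != 0;
  *pte &= ~PTE_A;          // clear so the next scan measures fresh activity
                           // (a real implementation would also flush the TLB here)
  return used;
}
```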
Memory Mapped Files
Memory mapping loads all or part of a file into memory, and the file is then operated on by reading and writing the corresponding memory addresses.
Operating systems generally provide an mmap system call. It takes a virtual address (va), a length (len), protection bits (prot), some flags (flags), an open file descriptor (fd) and an offset (offset). Starting at position offset of the file referred to by fd, it maps len bytes of content to the virtual address va, with the given protection, such as read-only or read-write.
Files are generally loaded lazily: the relevant information is recorded in a Virtual Memory Area (VMA) structure, the actual file contents are mapped into memory via page faults, and the unmap system call writes the dirty pages of the file mapping back to the file. A sketch of such a VMA record follows below.
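A minimal sketch of such a per-process VMA record, in the spirit of the xv6 mmap lab; the field names, NVMA, and the outline in the comments are choices made for this sketch, while struct file, filedup(), readi(), filewrite() and fileclose() are real xv6 names.

```c
#define NVMA 16   // max mappings per process (an arbitrary choice)

struct vma {
  int used;             // is this slot in use?
  uint64 addr;          // start virtual address of the mapping
  uint64 length;        // length of the mapping in bytes
  int prot;             // PROT_READ / PROT_WRITE
  int flags;            // MAP_SHARED / MAP_PRIVATE
  struct file *file;    // the mapped open file
  uint64 offset;        // starting offset within the file
};

// struct proc would gain:  struct vma vmas[NVMA];
//
// mmap():  find a free vma slot and an unused va range, record
//          (addr, length, prot, flags, file, offset), filedup(file),
//          and return addr -- no physical memory is allocated yet.
// page fault on a mapped address:  kalloc() a page, readi() the matching
//          file bytes into it, map it with permissions derived from prot.
// munmap(): for MAP_SHARED mappings write dirty pages back to the file
//          (e.g. with filewrite()), unmap the pages, and fileclose() the
//          file once the whole mapping is removed.
```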