virtual memory full report
Virtual memory
• Virtual memory is an illusion of a memory that is larger than the real memory
– Only some parts of a process are loaded in memory; the other parts are stored in a disk area called the swap space and loaded only when needed
– It is implemented using noncontiguous memory allocation
* The memory management unit (MMU) performs address translation
– The virtual memory handler (VM handler) is the part of the kernel that manages virtual memory
Overview of virtual memory
• Memory allocation information is stored in a page table or segment table;
it is used by the memory management unit (MMU)
• Parts of the process address space are loaded in memory when needed
Logical address space, physical address space and address translation
• The address space of a process is called the logical address space;
an address in it is a logical address
• The memory of the computer constitutes the physical address space;
an address in it is a physical address
• The MMU translates a logical address into a physical one
Paged virtual memory systems
• A process is split into pages of equal size
– The size of a page is a power of 2
* This simplifies the virtual memory hardware and makes it faster
– A logical address is viewed as a pair (page #, byte #)
– The MMU consults the page table to obtain the frame # where page page # resides
– It juxtaposes the frame # and the byte # to obtain the physical address
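Because the page size is a power of 2, the (page #, byte #) split and the juxtaposition with the frame # are simple bit operations. A minimal sketch; the 1 KB page size and the page table contents here are assumed for illustration:

```python
PAGE_SIZE = 1024                             # assumed: pages of 2**10 bytes
OFFSET_BITS = PAGE_SIZE.bit_length() - 1     # 10 bits for the byte #

def translate(logical_addr, page_table):
    """Split a logical address into (page #, byte #) and map it to a physical address."""
    page_no = logical_addr >> OFFSET_BITS        # high-order bits: page #
    byte_no = logical_addr & (PAGE_SIZE - 1)     # low-order bits: byte #
    frame_no = page_table[page_no]               # page table lookup
    return (frame_no << OFFSET_BITS) | byte_no   # juxtapose frame # and byte #

# Page 2 resides in frame 7: logical address 2*1024+5 maps to 7*1024+5
print(translate(2 * 1024 + 5, {2: 7}))
```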
Address translation in a paged virtual memory system
• The MMU uses the page # in a logical address to index the page table
• It uses the frame # found there to compute the physical address
Fields in a page table entry
• Each page table entry has the following fields:
– Valid bit: indicates whether the page exists in memory
* 1: page exists in memory; 0: page does not exist in memory
– Page frame #: indicates where the page is in memory
– Prot info: information for protection of the page
– Ref info: whether the page has been referenced after loading
– Modified: whether the page has been modified
* Such a page is also called a dirty page
– Other info: miscellaneous information
Demand loading of pages
• Memory commitment would be high if the entire address space of a process were kept in memory, hence
– Only some pages of a process are present in memory
– Other pages are loaded in memory when needed; this action is called demand loading of pages
* The logical address space of a process is stored in the swap space
* The MMU raises an interrupt called a page fault if the page to be accessed does not exist in memory
* The VM handler, which is the software component of virtual memory, loads the required page from the swap space into an empty page frame
Demand loading of pages
• A reference to page 3 causes a page fault because its valid bit is 0
• The VM handler loads page 3 in an empty page frame and updates
its entry in the page table
Page-in, page-out and page replacement operations
• Three operations are needed to support demand loading of pages
– Page-in
* A page is loaded in memory when a reference to it causes a page fault
– Page-out
* A page is removed from memory to free a page frame
* If it is a dirty page, it is copied into the swap space
– Page replacement
* A page-out operation is performed to free a page frame
* A page-in operation is performed into the same page frame
• Page-in and page-out operations constitute page traffic
Effective memory access time
• Effective memory access time of logical address (page #, byte #)@
= pr1 × 2 × (access time of memory)
+ (1 − pr1) × (access time of memory
+ time required to load the page
+ 2 × access time of memory)
where pr1 is the probability that page page # is already in memory
@ : assuming the page table itself exists in memory
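As a quick numeric check of this formula, a sketch with assumed timings (100 ns per memory access, 8 ms to load a page from disk):

```python
def effective_access_time(hit_prob, mem_time, page_load_time):
    """Effective memory access time per the formula above.
    A hit costs two memory accesses (page table + operand); a miss costs
    one page table access, the page load, and then the same two accesses."""
    return (hit_prob * 2 * mem_time
            + (1 - hit_prob) * (mem_time + page_load_time + 2 * mem_time))

# Assumed timings: 100 ns memory access, 8 ms (8_000_000 ns) page load
print(effective_access_time(0.999, 100, 8_000_000))
```

Even a 0.1% fault rate dominates the effective access time, which is why a high hit ratio is essential.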
Ensuring good system performance
• When a page fault arises in the currently operating process, the kernel switches the CPU to another process
– The page whose reference caused the page fault is loaded in memory
– Operation of the process that gave rise to the page fault is resumed sometime after the required page has been loaded in memory
Performance of virtual memory
• Performance of virtual memory depends on the hit ratio in memory
– High values of the hit ratio are possible due to the principle of locality of reference
* It states that the next logical address referenced by a process is likely to be in proximity of the previous few logical addresses referenced by the process
This is true for instructions most of the time (because the branch probability is typically approx 10%)
It is also true for large data structures like arrays, because loops refer to many elements of a data structure
Current locality of a process
• The current locality is the set of pages referenced in the previous few instructions
– Typically, the current locality changes gradually, rather than abruptly
– We define the proximity region of a logical address as the set of adjoining logical addresses
– Due to the locality principle, a high fraction of logical addresses referenced by a process lie in its current locality
Proximity regions of previous references and current locality of a process
• The symbol designates a recently used logical address
• The current locality consists of recently referenced pages
• Proximity regions of many logical addresses are in memory
Memory allocation to a process
• How much memory, i.e., how many page frames, should be allocated to a process?
– The hit ratio would be larger if more page frames are allocated
– The actual number of page frames allocated to a process is a tradeoff between
* A high value to ensure a high hit ratio, and
* A low value to ensure good utilization of memory
Desirable variation of page fault rate with memory allocation
• The page fault rate should not increase as more page frames are allocated (however, it can remain unchanged)
• This property provides a method of eliminating high page fault rates by increasing the number of page frames allocated to a process
Thrashing
• Thrashing is the coincidence of high page traffic and low CPU efficiency
– It occurs when processes operate in the high page fault zone
* Each process has too little memory allocated to it
– It can be prevented by ensuring adequate memory for each process
Functions of the paging hardware
• The paging hardware performs three functions
– Address translation and generation of page faults
* The MMU contains features to speed up address translation
– Memory protection
* A process should not be able to access pages of other processes
– Supporting page replacement
* Collects information about references and modifications of a page
Sets the reference bit when a page is referenced
Sets the 'modified' bit when a write operation is performed
* The VM handler uses this information to decide which page to replace when a page fault occurs
Address translation
• The MMU uses a translation look-aside buffer (TLB) to speed up address translation
– The TLB contains entries of the form (page #, frame #) for recently referenced pages
– The TLB access time is much smaller than the memory access time
* A hit in the TLB eliminates one memory access to look up the page table entry of a page
Translation look-aside buffer
• The MMU first searches the TLB for page #
• The page table is looked up if the TLB search fails
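The lookup order (TLB first, then the page table) can be sketched as follows; the TLB here is modeled as a plain dict, and its contents are assumed for illustration:

```python
def lookup_frame(page_no, tlb, page_table):
    """Return (frame #, TLB hit?): consult the TLB first, then the page table."""
    if page_no in tlb:                 # TLB hit: no page table memory access
        return tlb[page_no], True
    frame_no = page_table[page_no]     # TLB miss: look up the page table
    tlb[page_no] = frame_no            # cache the translation for next time
    return frame_no, False

tlb = {4: 9}                           # assumed: page 4 recently used, in frame 9
page_table = {4: 9, 7: 2}
print(lookup_frame(4, tlb, page_table))   # TLB hit
print(lookup_frame(7, tlb, page_table))   # TLB miss: page table is consulted
```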
Summary of actions in demand paging
• A page may not have an entry in the TLB but may exist in memory
• The TLB and the page table have to be updated when a page is loaded
Superpages
• TLB reach is stagnant even though memory sizes increase rapidly as technology advances
– TLB reach = page size × no. of entries in TLB
* It indicates how large a part of a process address space can be accessed through the TLB
– TLBs are expensive, so bigger TLBs are not affordable
* Stagnant TLB reach limits the effectiveness of TLBs
– Superpages are used to increase the TLB reach
* A superpage is a power-of-2 multiple of the page size
* It is aligned on an address in the logical and physical address spaces that is a multiple of its size
A TLB entry can be used for a page or a superpage
Max TLB reach = max superpage size × no. of entries in TLB
• The size of a superpage is adapted to the execution behaviour of a process
– The VM handler combines some frequently accessed consecutive pages into a superpage (called a promotion)
* The number of pages in a superpage is a power of two
* The first page has the appropriate alignment
– It disbands a superpage if some of its pages are not accessed frequently (called a demotion)
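A small sketch of the TLB reach formula and the alignment rule for promotion; the 4 KB page size and 64-entry TLB are assumed numbers:

```python
PAGE_SIZE = 4096          # assumed base page size
TLB_ENTRIES = 64          # assumed TLB size

def tlb_reach(page_size, entries):
    """TLB reach = page size x no. of entries in TLB."""
    return page_size * entries

def can_promote(first_page_no, n_pages):
    """A superpage must span a power-of-2 number of pages, and its first
    page must be aligned on a multiple of that number."""
    if n_pages <= 0 or n_pages & (n_pages - 1):
        return False                         # not a positive power of two
    return first_page_no % n_pages == 0      # alignment of the first page

print(tlb_reach(PAGE_SIZE, TLB_ENTRIES))        # base reach: 256 KB
print(tlb_reach(PAGE_SIZE * 16, TLB_ENTRIES))   # reach with 64 KB superpages
print(can_promote(32, 16), can_promote(10, 16))
```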
Address translation in a multiprogrammed system
• Page tables (PTs) of many processes exist in memory
• The PT address register (PTAR) points to the PT of the current process
• The PT size register contains the size of the current process, i.e., its number of pages
Memory protection
• The MMU implements memory protection as follows:
– Check whether a logical address (pi, bi) is valid, i.e., within the process address space
* Raise a memory protection exception if pi exceeds the contents of the PT size register
– Ensure that the kind of access being made is valid
* Check the kind of access against the misc info field of the page table entry
* Raise a memory protection exception if the two conflict
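The two checks can be sketched as below; the access-kind encoding ('r', 'w', 'x') in the prot info field is an assumption for illustration:

```python
class MemoryProtectionError(Exception):
    pass

def check_access(page_no, access_kind, pt_size, prot_info):
    """Raise an exception if the address is invalid or the access kind
    conflicts with the protection info of the page table entry."""
    if page_no >= pt_size:                       # beyond the process address space
        raise MemoryProtectionError("address outside address space")
    if access_kind not in prot_info[page_no]:    # e.g. a write to a read-only page
        raise MemoryProtectionError("access kind not permitted")

prot = {0: "rx", 1: "rw"}        # assumed: page 0 is code, page 1 is data
check_access(0, "x", 2, prot)    # OK: execute from the code page
try:
    check_access(0, "w", 2, prot)
except MemoryProtectionError as e:
    print("protection exception:", e)
```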
I/O operations in virtual memory
• The data area involved in an I/O operation may occupy several pages
– If one of the pages does not exist in memory, a page fault would arise during the I/O operation
* The I/O operation may be disrupted by such page faults
– Hence all pages involved in the I/O operation are preloaded
* An I/O fix is put on the pages (in the misc info field of the PT entry)
These pages are not removed from memory until the I/O operation completes
* Scatter/gather I/O: data for an I/O operation can be delivered to or gathered from noncontiguous page frames
Otherwise, the page frames have to be contiguous
(a) If the I/O system provides a scatter/gather I/O operation
(b) If the I/O system does not provide scatter/gather I/O
Functions of the VM handler
• The VM handler performs the following functions:
– Manage the logical address space of a process
* Organize the swap space and page table of the process
* Perform page-in and page-out operations
– Manage the physical memory
– Implement memory protection
– Maintain information for page replacement
* The paging hardware collects the information
* The VM handler maintains it in a convenient form
– Perform page replacement
– Allocate physical memory to processes
– Implement page sharing
Page fault handling and page replacement
• When a page fault occurs, the required page has to be loaded in memory
– The VM handler can use a free page frame, if one exists
– Otherwise, it performs a page replacement operation
* It removes one page from memory, thus freeing a page frame
* It loads the required page in the page frame
Page replacement operation
(a) Page 1 exists in page frame 2; it is dirty (see the m bit in its PT entry)
(b) It is removed from memory through a page-out operation
Page 4 is now loaded in page frame 2, and the PT and FT entries are updated
Practical page table organizations
• The page table of a process can be very large
– If logical addresses are 32 bits in length, and a page is 1K bytes
* The logical address space of a process
is 4 GB in size
and contains 4 million pages
* If a page table entry is 4 bytes in length
the page table occupies 16M bytes!
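The arithmetic behind these numbers:

```python
ADDRESS_BITS = 32
PAGE_SIZE = 1024          # 1 KB pages
PTE_SIZE = 4              # 4-byte page table entries

address_space = 2 ** ADDRESS_BITS          # 4 GB
num_pages = address_space // PAGE_SIZE     # 4 million pages
pt_size = num_pages * PTE_SIZE             # 16 MB page table per process

print(address_space, num_pages, pt_size)
```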
Q: How to reduce the memory requirements of page tables?
• The size of the page table should be reduced, but access to a page table entry should not become (much) slower
– Inverted page tables (IPT)
* Each entry contains information about a page frame rather than about a page
* The size of the IPT depends on the size of the physical address space rather than on the size of the logical address space of a process
Physical address spaces are smaller than logical ones!
– Multi-level page tables
* A page table is itself demand paged, so it exists only partly in memory
We then have two kinds of pages in memory: pages of processes and pages of page tables (PT pages)
Inverted page table (IPT)
• The inverted page table contains pairs of the form (process id, page id)
– While performing address translation for the logical address (pi, bi) of process P
* The MMU forms the pair (P, pi)
* It searches for the pair in the IPT
It raises a page fault if the pair does not exist in the IPT
The entry number in the IPT where the pair is found is the page frame number
– A hash table is used to speed up the search in the IPT and make address translation more efficient
* Now the frame number where a page is loaded has to be explicitly stored in the IPT; it is used in address translation
Inverted page tables:
(a) concept, (b) implementation using a hash table
(a) When page pi of P is loaded in memory, the pair (P, pi) is hashed
and also entered in the IPT
(b) Pairs hashing into the same hash table entry are linked in the IPT;
the MMU searches for a pair through the hash table and takes the frame #
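A minimal sketch of the hash-table implementation described above: each IPT entry links to the next pair that hashed into the same bucket. The structures and sizes here are assumptions for illustration:

```python
NUM_BUCKETS = 8   # assumed hash table size

class IPT:
    """Inverted page table: one entry per page frame, chained via a hash table."""
    def __init__(self, num_frames):
        self.entries = [None] * num_frames   # frame # -> (process, page, next frame #)
        self.buckets = [None] * NUM_BUCKETS  # hash value -> first frame # in chain

    def load(self, process, page, frame):
        h = hash((process, page)) % NUM_BUCKETS
        # Link the new entry at the head of the bucket's chain
        self.entries[frame] = (process, page, self.buckets[h])
        self.buckets[h] = frame

    def translate(self, process, page):
        h = hash((process, page)) % NUM_BUCKETS
        frame = self.buckets[h]
        while frame is not None:             # walk the chain of colliding pairs
            proc, pg, nxt = self.entries[frame]
            if (proc, pg) == (process, page):
                return frame                 # entry number = page frame number
            frame = nxt
        raise KeyError("page fault")         # pair not found in the IPT

ipt = IPT(num_frames=4)
ipt.load("P", 5, frame=2)
print(ipt.translate("P", 5))
```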
Concept of a two-level page table
• The page table is itself paged
• During address translation, the MMU checks whether the relevant page of the PT is in memory. If not, it loads that PT page
• The required page of the process is accessed through this PT page
Address translation using a two-level page table
• Address translation proceeds as follows:
– The page number pi in address (pi, bi) is split into two parts
(PT page #, entry # in PT page)
* where 'PT page #' is the number of the PT page that contains the page table entry for page pi
– The number of page table entries in a PT page is a power of 2, so bit splitting is used for this operation
– Address translation is performed as follows:
* The MMU raises a page fault if the PT page 'PT page #' is not present in memory
* Otherwise, it accesses the entry 'entry # in PT page' in this page.
This is the page table entry of pi
The MMU raises a page fault if page pi is not present in memory
Two-level page table organization

• pi is split into two parts
• One is used to access the entry of the PT page in the higher-order PT
• The other is used to access the PT entry of pi
• bi is used to access the required byte in the page
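The bit splitting can be sketched as below. The field widths are assumptions consistent with the earlier 32-bit/1 KB example: a 1 KB PT page holds 256 four-byte entries, so 8 bits index within a PT page:

```python
OFFSET_BITS = 10      # 1 KB pages, as in the earlier example
ENTRY_BITS = 8        # 1 KB PT page / 4-byte entries = 256 entries per PT page

def translate(logical_addr, higher_order_pt):
    """Two-level translation: higher-order PT -> PT page -> frame #."""
    byte_no = logical_addr & ((1 << OFFSET_BITS) - 1)
    page_no = logical_addr >> OFFSET_BITS              # pi
    pt_page_no = page_no >> ENTRY_BITS                 # index into higher-order PT
    entry_no = page_no & ((1 << ENTRY_BITS) - 1)       # index into the PT page
    pt_page = higher_order_pt[pt_page_no]              # page fault if absent
    frame_no = pt_page[entry_no]                       # page fault if absent
    return (frame_no << OFFSET_BITS) | byte_no

# pi = 258 means PT page 1, entry 2; that entry maps pi to frame 6
pt = {1: {2: 6}}
print(translate((258 << 10) | 17, pt))
```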
Multi-level page tables
• These tables are generalizations of two-level page tables
– Two- and multi-level page tables have been used in practice
* The Intel 80386 used two-level page tables
* The Sun SPARC uses three-level page tables
* The Motorola 68030 uses four-level page tables
VM handler modules in a paged system
• Page-in and page-out are paging mechanisms
• The paging policy uses information in VM handler tables and invokes these mechanisms when needed
Page replacement policies
• Three primary policies
– Optimal policy
* Not realizable in practice
* We use it only to evaluate other algorithms
– FIFO policy
* Needs information about when a page was loaded in memory
– Least-recently-used (LRU) policy
* Needs information about when a page was last used
* Needs a 'time stamp' of the last use of a page
• The VM hardware and software have to collect additional information to facilitate page replacement decisions
Page reference string
• A page reference string is a sequence of page numbers containing the pages referenced by a process during an execution, e.g.
1, 5, 3, 1, 2, 4, 1, 5, 3
– If 3 page frames are allocated to the process, the page replacement algorithms will make the following decisions when page 2 is accessed
* The FIFO page replacement algorithm would replace page 1
* The LRU page replacement algorithm would replace page 5
* The optimal page replacement algorithm would replace ... which page?
'Preempt the farthest reference' is one of the optimal strategies
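The three policies can be compared with a small simulator over the string above; the optimal policy peeks at the rest of the reference string ('preempt the farthest reference'):

```python
def simulate(refs, n_frames, victim_fn):
    """Run a page replacement policy; return (fault count, fault positions)."""
    frames, faults = [], []
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults.append(i)                                # page fault
        if len(frames) == n_frames:
            frames.remove(victim_fn(frames, refs, i))   # page replacement
        frames.append(page)
    return len(faults), faults

def fifo(frames, refs, i):
    return frames[0]                                    # earliest-loaded page

def lru(frames, refs, i):
    # page whose last use is farthest in the past
    return min(frames, key=lambda p: max(j for j in range(i) if refs[j] == p))

def optimal(frames, refs, i):
    future = refs[i + 1:]
    # preempt the farthest (or never-again-referenced) page
    return max(frames, key=lambda p: future.index(p) if p in future else len(future))

refs = [1, 5, 3, 1, 2, 4, 1, 5, 3]
for fn in (fifo, lru, optimal):
    print(fn.__name__, simulate(refs, 3, fn))
```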
Comparison of page replacement policies with alloc = 2
• The misc info field contains a time-stamp
• For FIFO, it is the time of loading
• For LRU, it is the time of last reference
Properties of page replacement policies
• The number of page faults should not increase if the memory allocation for a process is increased
– This requirement is satisfied if a page replacement policy possesses the inclusion property (also called the stack property)
– This property is important for ensuring good system performance
* If a process has a high page fault rate, increasing its memory allocation
May or may not reduce its page fault rate
Would not increase its page fault rate
Inclusion property (also called stack property)
• Notation:
– {pi}_k^n is the set of pages existing in memory at time t_k+ if n page frames are allocated to the process all through its execution
• A page replacement algorithm possesses the inclusion property if {pi}_k^n is included in {pi}_k^(n+1)
– This way, page faults would not increase if the process is allocated n+1 page frames instead of n frames (Q: Why?)
– Consider the page reference string
5, 4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5, ...
Performance of FIFO and LRU page replacement
• The element 4* in the page reference string indicates that a page fault occurred when page 4 was referenced
Performance of FIFO and LRU page replacement
• In FIFO replacement, page faults may increase as more memory is allocated. This is called Belady's anomaly
• LRU possesses the stack property, hence it does not suffer from the anomaly
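Belady's anomaly can be demonstrated with a FIFO simulation. The reference string below is the classic textbook example of the anomaly (not the string above), and the simulator is a sketch:

```python
def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames, faults = [], 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)          # evict the earliest-loaded page
            frames.append(page)
    return faults

# Classic Belady string: MORE frames yield MORE faults under FIFO
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults with 3 frames
print(fifo_faults(refs, 4))   # 10 faults with 4 frames
```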
Practical page replacement policies
• Comparison
– FIFO does not perform well in practice
* Because it does not possess the inclusion property
– The LRU algorithm is impractical
* Because of the hardware cost of storing the time of last reference; it requires a time-stamp in each entry of the page table
– The second chance algorithm is a hybrid algorithm
* Pages are entered in a FIFO queue, which is scanned in a circular manner when a page replacement is necessary
* The reference bit of the next page is checked
If the bit is set, it is reset and the scan proceeds
If the bit is not set, the page is replaced
Performance of FIFO and second-chance page replacement policies
• 4* indicates that a page fault occurred when page 4 was referenced
• In the second chance algorithm, a '.' next to a page implies that the reference bit of the page is 'on'
Clock algorithm
• Analogous to the second chance algorithm
– The pointer that moves over all pages in a scan is like a clock hand
• Two-handed clock algorithm
– A resetting hand (RH) and an examining hand (EH). When a page is to be replaced, the hands are moved synchronously
* The reference bit of the page pointed to by RH is reset
* The page pointed to by EH is examined
If its reference bit is 'on', the scan proceeds
Otherwise, the page pointed to by EH is replaced
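A sketch of the one-handed clock algorithm, which is equivalent to second chance; the page list and reference bits are assumed for illustration:

```python
def clock_replace(pages, ref_bits, hand):
    """Advance the clock hand until a page with reference bit 0 is found.
    Pages passed over get a second chance: their bit is reset to 0.
    Returns (victim page, new hand position)."""
    while True:
        if ref_bits[hand] == 0:
            victim = pages[hand]
            return victim, (hand + 1) % len(pages)
        ref_bits[hand] = 0                 # reset the bit: second chance
        hand = (hand + 1) % len(pages)

pages = ["A", "B", "C", "D"]
ref_bits = [1, 1, 0, 1]                    # assumed reference bits
victim, hand = clock_replace(pages, ref_bits, hand=0)
print(victim, hand)                        # C is replaced; A and B got a second chance
```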
Determining the memory allocation for a process
• Notation:
– alloc_i is the number of page frames allocated to process Pi
• How much memory should be allocated to a process?
– Too little memory
* The page fault rate would be high
– Too much memory
* The degree of multiprogramming in the system would be low
– System performance is affected in both these situations
• Thrashing is a situation in which each process in the system is allocated too little memory
– (In some literature, a process is said to thrash if it is allocated too little memory)
– Thrashing is characterized by two properties
* Too much page I/O is performed, i.e., too many page-ins/outs
* The CPU does not have enough work to perform
– Q: How to avoid thrashing?
* Allocate an adequate amount of memory to a process
* Vary the memory allocation to suit the requirements of the process
* The notion of the working set aids in both of these
Working set
• The working set (WS) of a process is the set of pages referenced by it over a window of the preceding machine instructions
– The VM handler determines the WS of a process periodically
* It determines the WS size (WSS) and allocates that many page frames
* If it cannot allocate that many page frames, it swaps out the process completely. This avoids thrashing
– Such an allocator is called a working set memory allocator
– Since process locality changes gradually, rather than abruptly, each process performs well most of the time
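The working set over a window can be computed directly from a page reference string; a sketch in which the window size of 4 is an assumed parameter:

```python
def working_set(refs, t, window=4):
    """Pages referenced in the last `window` references up to and including time t."""
    return set(refs[max(0, t - window + 1):t + 1])

refs = [1, 5, 3, 1, 2, 4, 1, 5, 3]
print(working_set(refs, 4))   # WS after the 5th reference
print(working_set(refs, 8))   # WS after the 9th reference
```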
Operation of a working set memory allocator
• The system has 60 page frames
• At t100, 8 page frames are free but WSS4 = 10, so proc4 is swapped out
• At t300, proc4 is swapped in because alloc4 = WSS4 is possible
• At t400, proc4 is swapped out once again
Program sharing in virtual memory systems
• If two or more processes wish to use the code of the same program, is it possible to have only one copy of the program in memory?
Dynamic sharing of program C by processes A and B
• Program C occupies pages i and i+1 in the address space of process A but j and j+1 in the address space of process B
• Hence the operand of the Add instruction would have to be different in the two processes
• Hence a single copy of C cannot be shared by processes A and B
Dynamic sharing of a single copy of program C by processes A and B
• Two conditions have to be satisfied for such sharing
– Program C should be reentrant
* This way, it can be invoked simultaneously by A and B
– Program C should have identical addresses in the logical address spaces of processes A and B
* The page table entries of both A and B would then point to the same copy of the pages of C
• Collection of page reference information becomes cumbersome for shared pages
• Consequently, the VM handler may handle shared pages separately
Copy-on-write
• Parent and child processes share their address spaces. Memory and swap space can be optimized by not replicating the pages
– The copy-on-write technique shares a single copy of a page until it is modified by one of the processes
* Each page is made read-only in both address spaces, and a single copy of the page exists
* When a process modifies a page, a protection fault occurs. A private copy of the page is then made for the process and added to its page table
* Thus multiple copies exist only for modified pages
Memory mapped files
• Memory mapping of a file by a process binds that file to a part of the logical address space of the process
– When the process refers to data in the file, the VM handler organizes access through the corresponding part of the process address space
* Loading and writing of data in the file are performed by the VM handler analogously to the loading and replacement of pages in the logical address space of a process
Memory mapping of files
• Benefits
– Less memory-to-memory copying
* When data is read from a file conventionally, it is first read into an I/O buffer and then moved into the logical address space of a process
* Memory mapping avoids this memory-to-memory copying
– Fewer disk operations
* A file page existing in memory may be accessed several times, thus avoiding several disk accesses
– Prefetching of data
* Data may be loaded in memory before it is accessed (because some other data in the same file page was accessed)
– Efficient data access
* Irrespective of the file organization
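Python's standard mmap module gives a concrete feel for this: a small sketch that maps a temporary file into the address space and reads and writes it through memory:

```python
import mmap
import tempfile

def roundtrip():
    """Map a temporary file and access its contents through memory."""
    with tempfile.TemporaryFile() as f:
        f.write(b"hello, paging")
        f.flush()
        # Bind the file into the address space; reads/writes go through memory
        with mmap.mmap(f.fileno(), 0) as m:
            before = bytes(m[:5])
            m[0:5] = b"HELLO"          # modifies the file via the mapping
            after = bytes(m[:5])
    return before, after

print(roundtrip())
```

No read()/write() calls are needed once the mapping exists; the VM handler pages the file data in and out on demand.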
Virtual memory implementation using segmentation
• The VM handler maintains a segment table (ST)
• The MMU uses its contents for address translation of a logical address (si, bi)
Use of symbolic segment and byte ids
• Symbolic ids are associated with bytes within a segment
• The compiler makes this information available to the VM handler
• The VM handler puts this information in the segment page link table (SLT)
• The MMU uses it for address translation
Address translation in segmentation with paging
• A page table is maintained for each segment
• The MMU splits a logical address into three components: si, pi and bi
Virtual memory in Unix
• Text pages and the swap space
– A text (i.e., code) page is initially loaded from a code file
* The page is written into the swap file when swapped out; this way, only used text pages are added to the swap space
* When a process page faults for such a page, it may be in use by another process. In this case, a page-in operation is avoided
– A new data page is a zero-filled page. When it is removed from memory, it is written into the swap space
– Swap space is allocated in increments of a few disk blocks. A process has to be cancelled if its swap space cannot be increased when needed
• A copy-on-write feature is provided
• Paging
– The VM handler keeps 5% of page frames on the free list
– It activates the pageout daemon if the number falls below 5%. The pageout daemon adds some page frames to the list and goes back to sleep
* It maintains two lists of pages: the active and inactive lists
* It always keeps a certain fraction of pages in the inactive list
* When it is activated by the VM handler, it
swaps out inactive processes and examines their page frames and the page frames of other processes that contain inactive pages, and
adds clean pages to the free list and starts page-out operations on dirty pages
Virtual memory in Linux
• Paging
– Linux uses three-level page tables on 64-bit architectures, and a buddy system allocator
– The VM handler maintains a sufficient number of free page frames using a clock algorithm
* If a process page faults for a page contained in a page frame that is marked free, the frame is simply 'connected' to the process
– A file-backed memory region supports memory mapping of files; a private memory region provides copy-on-write
Virtual memory in Windows
• Salient features of Windows virtual memory
– The kernel is mapped into a part of each address space
– The VM handler uses two-level page tables
– It maintains the state of each page frame
* A standby page is not a part of any working set, but it can be simply 'connected' to a process that page faults for it
– A section object represents shareable memory
* Each sharing process has a view of the object
* When a view is accessed for the first time, memory is allocated to the part of the object covered by the view
– A copy-on-write feature is provided