
Tuesday, June 25, 2013

Explain demand paging and page replacement

Demand paging and page replacement are two very important memory management strategies in computer operating systems.

About Demand Paging
- Demand paging is the opposite of anticipatory paging. It is a memory management strategy developed for managing virtual memory.
- In an operating system that uses demand paging, a copy of a disk page is brought into physical memory only when a request is made for it, i.e., when a page fault occurs.
- Consequently, a process starts executing with none of its pages loaded into main memory, and a series of page faults follows until all of its required pages have been brought in.
- Demand paging falls under the category of lazy loading techniques: a page is brought into main memory only if the executing process demands it.
- That is why the strategy is called demand paging; it is sometimes also called lazy evaluation.
- A page table implementation is required for demand paging.
- The table maps logical memory to physical memory.
- Each entry carries a valid/invalid bit that marks whether the page is currently in memory.

The following steps are carried out whenever a process demands a page (a small sketch of this flow follows the list below):
  1. An attempt is made to access the page.
  2. If the page is present in memory, execution continues as usual.
  3. If the page is not present, i.e., it is marked invalid, a page fault is generated.
  4. The memory reference is checked to see whether it points to a valid location in the virtual address space. If it is an illegal memory access, the process is terminated; otherwise the requested page has to be paged in.
  5. A disk operation is scheduled to read the requested page into physical memory.
  6. The instruction that raised the page fault trap is restarted.
- The lazy nature of this strategy is itself a great advantage.
- Because more physical memory remains available, more processes can execute, which reduces context switching time.
- Program start-up latency is lower, because less data has to move between main memory and secondary memory at load time.
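A minimal sketch of this fault-handling flow in Python is shown below. The page table, backing store, and helper names are illustrative assumptions rather than a real OS interface, and for simplicity it assumes a free frame is always available, so no replacement is needed yet.

# Minimal sketch of demand-paging fault handling (hypothetical data structures,
# not a real OS interface): pages start invalid and are loaded only on demand.

class PageTableEntry:
    def __init__(self):
        self.valid = False   # is the page present in physical memory?
        self.frame = None    # physical frame number when valid

def access(page_table, page_no, backing_store, memory, free_frames):
    """Return the frame holding page_no, paging it in on a fault."""
    entry = page_table.get(page_no)
    if entry is None:
        raise MemoryError("illegal memory access: terminate process")
    if not entry.valid:                          # page fault
        frame = free_frames.pop()                # assume a free frame exists
        memory[frame] = backing_store[page_no]   # the scheduled disk read
        entry.frame, entry.valid = frame, True
        # the faulting instruction would now be restarted
    return entry.frame

# usage: 4 physical frames, 8 pages on the backing store
backing_store = {p: f"contents of page {p}" for p in range(8)}
memory, free_frames = {}, list(range(4))
page_table = {p: PageTableEntry() for p in range(8)}
print(access(page_table, 3, backing_store, memory, free_frames))  # fault, then frame number
print(access(page_table, 3, backing_store, memory, free_frames))  # already valid, no fault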


About Page Replacement
- When the number of available real-memory frames becomes low, a page stealer is invoked.
- The page stealer searches through the page frame table (PFT), which records whether each page has been referenced and whether it has been modified, for pages to steal.
- If the page stealer finds a page whose reference flag is set, it does not steal that page; instead it resets the reference flag.
- If, on a later pass, the page is still flagged as unreferenced, the stealer takes it (a sketch of such a pass follows below).
- Any change made to a page is indicated by the modify flag.
- If the modify flag of the page to be stolen is set, a page-out call has to be made before the page stealer can take the frame.
- Pages belonging to working (currently executing) segments are written to the so-called paging space, while pages of persistent segments are written back to their location on disk.
- Page replacement is carried out by page replacement algorithms, which also keep track of page faults.
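Below is a small illustrative sketch of such a page-stealer pass. The frame-table structure, the flag names, and the page_out callback are assumptions made for the example; a real kernel keeps this state in hardware-assisted page frame tables.

# Sketch of a page-stealer pass over a page frame table (PFT).
# Referenced pages get a second chance; a modified victim is paged out first.

def steal_page(frame_table, page_out):
    """Cycle through the frames until an unreferenced page is found and stolen."""
    while True:
        for frame in frame_table:
            if frame["referenced"]:
                frame["referenced"] = False   # reset the flag: second chance
                continue
            if frame["modified"]:
                page_out(frame)               # write the dirty page out first
                frame["modified"] = False
            return frame                      # this frame is stolen

frames = [
    {"page": 0, "referenced": True,  "modified": False},
    {"page": 1, "referenced": False, "modified": True},
    {"page": 2, "referenced": True,  "modified": True},
]
victim = steal_page(frames, page_out=lambda f: print("paging out page", f["page"]))
print("stole frame holding page", victim["page"])   # page 1, the unreferenced one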


Monday, June 24, 2013

Explain the page replacement algorithms - FIFO, LRU, and Optimal

- Most computer operating systems use paging for virtual memory management.
- Whenever a page fault occurs, some page must be brought in and some page may have to be swapped out. Who decides which pages are replaced, and how?
- That is the job of the page replacement algorithms.
- Page replacement algorithms decide which pages are to be paged out, i.e., written to disk, when a page of memory has to be allocated.
- Paging takes place only when a page fault occurs and a free frame cannot satisfy the request, either because none is available or because the number of free frames has fallen below some threshold.
- If a previously paged-out page is referenced again, it has to be read back in from the disk.
- While doing this, the operating system has to wait for the completion of the input/output operation.
- The quality of a page replacement algorithm is measured by the time spent waiting for such page-ins: the less waiting, the better the algorithm.
- A page replacement algorithm studies the information about page accesses provided by the hardware and decides which pages should be replaced so that the number of page faults is minimized.

In this article we look at three page replacement algorithms; a small simulation comparing them appears at the end of the post.

FIFO (first-in, first-out):
- This is the simplest of all the page replacement algorithms; it has the lowest overhead and requires very little bookkeeping on the part of the OS.
- The operating system keeps all pages in memory in a queue.
- The pages that arrived most recently sit at the back, while the oldest pages stand at the front of the queue.
- When a replacement has to be made, the oldest page is selected and replaced.
- Even though this algorithm is cheap and intuitive, it does not perform well in practice.
- Therefore, it is rarely used in its original form.
- The VAX/VMS operating system uses the FIFO replacement algorithm with some modifications.
- Skipping a limited number of entries gives a partial second chance.

Least recently used (LRU):
- This replacement algorithm resembles NRU in name, but differs in that page usage is tracked over a period of time.
- It is based on the idea that pages used heavily in the current set of instructions are likely to be used heavily in the next set of instructions as well.
- In theory LRU provides near-optimal performance, but in practice it is quite expensive to implement.
- To reduce the implementation cost, a few implementation methods have been devised.
- Of these, the linked-list method proves to be the costliest, because it requires moving entries around in memory on every reference, which is very time consuming.
- Another method requires hardware support.

Optimal page replacement algorithm:
- This is also known as the clairvoyant replacement algorithm.
- The page to be swapped in replaces the page that, according to the operating system, will not be used again for the longest time.
- In practice, this algorithm cannot be implemented in a general-purpose OS, because the time at which a page will next be used cannot be predicted.
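To make the comparison concrete, here is a small simulation (an illustrative sketch, not production code) that counts page faults for FIFO, LRU, and the optimal algorithm on the same reference string with three frames.

# Count page faults for FIFO, LRU, and Optimal on one reference string.

def fifo(refs, nframes):
    frames, faults = [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)              # evict the oldest page
            frames.append(p)
    return faults

def lru(refs, nframes):
    frames, faults = [], 0
    for p in refs:
        if p in frames:
            frames.remove(p)               # move to the most-recently-used end
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)              # evict the least recently used page
        frames.append(p)
    return faults

def optimal(refs, nframes):
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                # evict the page used furthest in the future (or never again)
                future = refs[i + 1:]
                victim = max(frames, key=lambda q: future.index(q)
                             if q in future else len(future) + 1)
                frames.remove(victim)
            frames.append(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
for name, algo in [("FIFO", fifo), ("LRU", lru), ("Optimal", optimal)]:
    print(name, algo(refs, 3), "faults")

On this particular reference string with three frames the three algorithms incur 15, 12, and 9 page faults respectively, which illustrates the usual ordering: FIFO does worst, LRU comes close to optimal, and the optimal algorithm gives the lower bound.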


Thursday, January 21, 2010

Page Buffering Algorithm

There are some other procedures that are often used in addition to a specific page replacement algorithm.

- Systems commonly keep a pool of free frames. When a page fault occurs, a victim frame is chosen as before. However, the desired page is read into a free frame from the pool before the victim is written out. This procedure allows the process to restart as soon as possible, without waiting for the victim page to be written out. When the victim is later written out, its frame is added to the free-frame pool (a sketch of this scheme follows at the end of this post).
- Whenever the paging device is idle, a modified page is selected and is written to the disk. Its modify bit is then reset. This scheme increases the probability that a page will be clean when it is selected for replacement, and will not need to be written out.
- Another modification is to keep a pool of free frames, but to remember which page was in each frame. Since the frame contents are not modified when a frame is written to the disk, the old page can be reused directly from the free-frame pool if it is needed before that frame is reused. No I/O is needed in this case. When a page fault occurs, we first check whether the desired page is in the free-frame pool. If it is not, we must select a free frame and read into it.

This technique is used in VAX/VMS system, with a FIFO replacement algorithm. When the FIFO replacement algorithm mistakenly replaces a page that is still in active use, that page is quickly retrieved from the free-frame buffer, and no I/O is necessary.
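A rough sketch of the free-frame-pool idea described above. The frame dictionaries and the read_in, write_out, and choose_victim callbacks are hypothetical stand-ins for kernel machinery; the point is only the ordering of events: the faulting page is read into a pooled frame first, and the dirty victim is written out afterwards.

# Sketch of page buffering with a pool of free frames (hypothetical helpers).
def handle_fault(page, free_pool, resident, choose_victim, read_in, write_out):
    # Remembered-contents optimisation: the page may still sit in a pooled
    # frame that has not been reused yet, in which case no I/O is needed.
    for frame in free_pool:
        if frame.get("old_page") == page:
            free_pool.remove(frame)
            resident[page] = frame
            return
    frame = free_pool.pop(0)                  # take a frame from the pool
    read_in(page, frame)                      # the process can restart here
    resident[page] = frame
    victim_page, victim_frame = choose_victim()
    if victim_frame["dirty"]:
        write_out(victim_page, victim_frame)  # deferred write of the victim
        victim_frame["dirty"] = False
    victim_frame["old_page"] = victim_page    # remember what the frame held
    del resident[victim_page]
    free_pool.append(victim_frame)            # the frame rejoins the pool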


Monday, January 18, 2010

Aging Page Replacement Algorithm

The aging algorithm is a descendant of the NFU algorithm, with modifications to make it aware of the time span of use. Instead of just incrementing the counters of referenced pages, which puts equal emphasis on page references regardless of when they occurred, the reference counter of a page is first shifted right (divided by 2) before the referenced bit is added to the left of that binary number. Suppose that after the first clock tick the R bits for pages 0 to 5 have the values 1, 0, 1, 0, 1, and 1, respectively (page 0 is 1, page 1 is 0, page 2 is 1, etc.). In other words, between tick 0 and tick 1, pages 0, 2, 4, and 5 were referenced, setting their R bits to 1, while the other ones remain 0.

[Figure: the aging algorithm]

When a page fault occurs, the page whose counter is the lowest is removed. It is clear that a page that has not been referenced for, say, four clock ticks will have four leading zeros in its counter and thus will have a lower value than the counter of a page that has not been referenced for three clock ticks.

Aging differs from LRU in the sense that aging can only keep track of the references in the latest 16/32 (depending on the bit size of the processor's integers) time intervals. Consequently, two pages may have reference counters of 00000000, even though one page was referenced 9 intervals ago and the other 1000 intervals ago. Generally speaking, knowing the usage within the past 16 intervals is sufficient for making a good decision as to which page to swap out. Thus, aging can offer near-optimal performance for a moderate price.
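Here is a compact sketch of the aging update with 8-bit counters (the width is arbitrary for the example); the R bits would normally be collected from the hardware at each clock tick.

# Aging: at every clock tick each counter is shifted right one bit and the
# page's R bit is added at the leftmost position (8-bit counters here).

NBITS = 8

def tick(counters, r_bits):
    for page, r in enumerate(r_bits):
        counters[page] = (counters[page] >> 1) | (r << (NBITS - 1))
    return counters

def victim(counters):
    return min(range(len(counters)), key=lambda p: counters[p])

counters = [0] * 6
# R bits for pages 0..5 after the first clock tick, as in the example above
counters = tick(counters, [1, 0, 1, 0, 1, 1])
print([format(c, "08b") for c in counters])   # 10000000 for the referenced pages
print("evict page", victim(counters))         # page 1, one of the lowest counters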


Random and Not Frequently Used (NFU) Page replacement Algorithms

- Random replacement algorithm replaces a random page in memory. This eliminates the overhead cost of tracking page references. Usually it fares better than FIFO, and for looping memory references it is better than LRU, although generally LRU performs better in practice. OS/390 uses global LRU approximation and falls back to random replacement when LRU performance degenerates, and the Intel i860 processor used a random replacement policy.

- The Not Frequently Used (NFU) page replacement algorithm requires a counter; every page has one counter of its own, initially set to 0. At each clock interrupt, the operating system scans all the pages in memory. For each page, the R bit, which is 0 or 1, is added to its counter. In effect, the counters are an attempt to keep track of how often each page has been referenced. When a page fault occurs, the page with the lowest counter is chosen for replacement.
The main problem with NFU is that it never forgets anything. For example, in a multi-pass compiler, pages that were heavily used during pass 1 may still have a high count well into later passes. In fact, if pass 1 happens to have the longest execution time of all the passes, the pages containing the code for subsequent passes may always have lower counts than the pass 1 pages. Consequently, the operating system will remove useful pages instead of pages no longer in use.

Fortunately, a small modification to NFU makes it able to simulate LRU quite well. The modification has two parts. First, the counters are each shifted right 1 bit before the R bit is added in. Second, the R bit is added to the leftmost, rather than the rightmost bit.
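A tiny sketch of plain NFU, before the shift modification described above, might look like this; the clock interrupt and the R bits are simulated rather than taken from real hardware.

# NFU: at each clock interrupt, add each page's R bit to its counter;
# on a page fault, the page with the lowest counter is replaced.

def clock_interrupt(counters, r_bits):
    for page, r in r_bits.items():
        counters[page] = counters.get(page, 0) + r
        r_bits[page] = 0                     # R bits are cleared after the scan

def choose_victim(counters):
    return min(counters, key=counters.get)

counters = {0: 0, 1: 0, 2: 0}
r_bits   = {0: 1, 1: 0, 2: 1}
clock_interrupt(counters, r_bits)
clock_interrupt(counters, {0: 1, 1: 1, 2: 0})
print(counters)                              # {0: 2, 1: 1, 2: 1}
print("evict page", choose_victim(counters)) # page 1 (ties broken by dict order)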


Least Recently Used Page Replacement Algorithm - LRU

The least recently used (LRU) algorithm applies to a cache memory having a number of independently replaceable pages. When the cache is full and a memory reference is not found in any page in the cache, the page to be replaced is the one which has not been accessed for the longest amount of time. An ordered history of page numbers, called an LRU stack, must be generated for successive memory references.
Although LRU is theoretically realizable, it is not cheap. To fully implement LRU, it is necessary to maintain a linked list of all pages in memory, with the most recently used page at the front and the least recently used page at the rear. The difficulty is that the list must be updated on every memory reference. Finding a page in the list, deleting it, and then moving it to the front is a very time consuming operation, even in hardware.

However, LRU can be implemented using special hardware. Suppose the hardware has a 64-bit counter that is incremented after every instruction. Whenever a page is accessed, the current counter value is recorded for that page. Whenever a page needs to be replaced, the operating system selects the page with the lowest recorded value and swaps it out. On most present hardware this is not feasible, because the required hardware counter does not exist.
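The counter idea can be sketched in software, with a global variable standing in for the hypothetical 64-bit hardware counter described above; the class and its names are purely illustrative.

# Counter-based LRU sketch: a global counter is incremented on every access,
# each page records the counter value when touched, and the page with the
# lowest recorded value is the least recently used.

class LRUCounter:
    def __init__(self, nframes):
        self.nframes, self.clock = nframes, 0
        self.last_use = {}                   # page -> counter value at last access

    def touch(self, page):
        self.clock += 1
        if page not in self.last_use and len(self.last_use) == self.nframes:
            victim = min(self.last_use, key=self.last_use.get)
            del self.last_use[victim]        # evict the least recently used page
            print("evict", victim)
        self.last_use[page] = self.clock

mem = LRUCounter(nframes=3)
for page in [1, 2, 3, 1, 4]:                 # accessing page 4 evicts page 2
    mem.touch(page)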
One important advantage of the LRU algorithm is that it is amenable to full statistical analysis. It has been proven, for example, that LRU can never result in more than N times more page faults than the OPT algorithm, where N is proportional to the number of pages in the managed pool.
On the other hand, LRU's weakness is that its performance tends to degenerate under many quite common reference patterns.


Friday, January 15, 2010

Clock Page Replacement Algorithm

Although second chance is a reasonable algorithm, it is unnecessarily inefficient because it is constantly moving pages around on its list. A better approach is to keep all the page frames on a circular list in the form of a clock.

The clock algorithm keeps a circular list of pages in memory, with the "hand" (iterator) pointing to the oldest page in the list. When a page fault occurs and no empty frames exist, then the R (referenced) bit is inspected at the hand's location.
* If its R bit is 0, the page is evicted, the new page is inserted into the clock in its place, and the hand is advanced one position.
* If R is 1, it is cleared and the hand is advanced to the next page. This process is repeated until a page is found with R = 0.

[Figure: the clock page replacement algorithm]
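A minimal sketch of the basic clock algorithm described above (not of the variants listed next); the class and frame layout are assumptions made for the example.

# Basic clock algorithm: a circular list of frames with a "hand"; a page with
# R = 1 has its bit cleared and is skipped, a page with R = 0 is evicted.

class Clock:
    def __init__(self, nframes):
        self.frames = [None] * nframes       # each slot: [page, r_bit] or None
        self.hand = 0

    def access(self, page):
        for slot in self.frames:
            if slot and slot[0] == page:
                slot[1] = 1                  # hit: set the reference bit
                return
        # page fault: advance the hand until a frame with R = 0 is found
        while self.frames[self.hand] and self.frames[self.hand][1]:
            self.frames[self.hand][1] = 0    # clear R and move the hand on
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = [page, 1]   # evict / fill this frame
        self.hand = (self.hand + 1) % len(self.frames)

clock = Clock(nframes=3)
for page in [1, 2, 3, 1, 4]:                 # page 4 replaces a frame with R = 0
    clock.access(page)
print(clock.frames)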

Variants on Clock:
- Clock-Pro keeps a circular list of information about recently-referenced pages, including all M pages in memory as well as the most recent M pages that have been paged out. This extra information on paged-out pages, like the similar information maintained by ARC, helps it work better than LRU on large loops and one-time scans.
- WSclock: The "aging" algorithm and the "WSClock" algorithm are probably the most important page replacement algorithms in practice.
- CAR is a page replacement algorithm that has performance comparable to ARC, and substantially outperforms both LRU and CLOCK. The algorithm CAR is self-tuning and requires no user-specified magic parameters.


Second Chance Page Replacement Algorithm

- A simple modification to FIFO that avoids the problem of throwing out a heavily used page is to inspect the R bit of the oldest page.
- If it is 0, the page is both old and unused, so it is replaced immediately.
- If the R bit is 1, the bit is cleared, the page is put onto the end of the list of pages, and its load time is updated as though it had just arrived in memory. Then the search continues. This algorithm is called second chance.
* Suppose that a page fault occurs at time 20. The oldest page is A, which arrived at time 0, when the process started.
* If A has the R bit cleared, it is evicted from memory. On the other hand, if the R bit is set, A is put onto the end of the list and its "load time" is reset to the current time (20). The R bit is also cleared. The search for a suitable page continues with B.
- What second chance is doing is looking for an old page that has not been referenced in the previous clock interval. If all the pages have been referenced, second chance degenerates into pure FIFO.

[Figure: operation of second chance. (a) Pages sorted in FIFO order. (b) Page list if a page fault occurs at time 20 and A has its R bit set. The numbers above the pages are their loading times.]
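A short sketch of second chance as a FIFO queue in which a referenced page is re-queued with its R bit cleared; the page names mirror the A/B example above, and the data layout is an assumption made for the example.

# Second chance: FIFO order, but a page with R = 1 is moved to the tail with
# its R bit cleared instead of being evicted.

from collections import deque

def second_chance_evict(queue):
    """queue holds [page, r_bit] lists, oldest at the left; returns the victim."""
    while True:
        page, r = queue[0]
        if r:
            queue.rotate(-1)          # move the oldest page to the tail...
            queue[-1][1] = 0          # ...with its R bit cleared (fresh load time)
        else:
            queue.popleft()
            return page               # old and unreferenced: evict it

# pages A..D loaded in FIFO order; A and C have been referenced since loading
queue = deque([["A", 1], ["B", 0], ["C", 1], ["D", 0]])
print("evict", second_chance_evict(queue))   # B is evicted; A got a second chance
print(list(queue))                           # A is now at the tail with R = 0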


Thursday, January 14, 2010

First-In First-Out Page Replacement Algorithm

The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little book-keeping on the part of the operating system. The operating system maintains a list of all pages currently in memory, with the page at the head of the list the oldest one and the page at the tail the most recent arrival. On a page fault, the page at the head is removed and the new page added to the tail of the list.
It is not strictly necessary to record the time when a page is brought in. We can create a FIFO queue to hold all the pages in memory. We replace the page at the head of the queue. When a page is brought into memory, we insert it at the tail of the queue.

While FIFO is cheap and intuitive, it performs poorly in practical application. Thus, it is rarely used in its unmodified form. The page replaced may be an initialization module that was used a long time ago and is no longer needed. On the other hand, it could contain a heavily used variable that was initialized early and is in constant use.
The FIFO page replacement algorithm is used by the VAX/VMS operating system, with some modifications. Partial second chance is provided by skipping a limited number of entries with valid translation table references, and additionally, pages are displaced from a process's working set to a system-wide pool from which they can be recovered if not already re-used.


Not Recently Used (NRU) Page Replacement Algorithm

The not recently used (NRU) page replacement algorithm is an algorithm that favors keeping pages in memory that have been recently used. This algorithm works on the following principle: when a page is referenced, a referenced bit is set for that page, marking it as referenced. Similarly, when a page is modified (written to), a modified bit is set. The setting of the bits is usually done by the hardware, although it is possible to do so on the software level as well.

- When a page fault occurs, the operating system inspects all the pages and divides them into four categories based on the current values of their R (referenced; read or written) and M (modified) bits:
* Class 0: not referenced, not modified.
* Class 1: not referenced, modified.
* Class 2: referenced, not modified.
* Class 3: referenced, modified.
- Although class 1 pages seem, at first glance, impossible, they occur when a class 3 page has its R bit cleared by a clock interrupt. Clock interrupts do not clear the M bit because this information is needed to know whether the page has to be rewritten to disk or not. Clearing R but not M leads to a class 1 page.
- The NRU (Not Recently Used) algorithm removes a page at random from the lowest-numbered nonempty class (see the sketch after this list).
- It is better to remove a modified page that has not been referenced in at least one clock tick (typically 20 msec) than a clean page that is in heavy use.
- The main attraction of NRU is that it is easy to understand, moderately efficient to implement, and gives a performance that, while certainly not optimal, may be adequate.
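A brief sketch of the classification and the random choice from the lowest nonempty class; the dictionary of per-page R/M bits is an assumption made for the example.

# NRU: classify pages by their R and M bits into classes 0-3 and remove a
# random page from the lowest-numbered nonempty class.

import random

def nru_victim(pages):
    """pages: dict page -> (r_bit, m_bit). Returns the page to evict."""
    classes = {0: [], 1: [], 2: [], 3: []}
    for page, (r, m) in pages.items():
        classes[2 * r + m].append(page)      # class number = 2*R + M
    for c in range(4):
        if classes[c]:
            return random.choice(classes[c])

pages = {"A": (1, 1), "B": (0, 1), "C": (1, 0), "D": (0, 1)}
print("evict", nru_victim(pages))            # B or D: class 1 (not referenced, modified)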


Tuesday, January 12, 2010

Optimal Page Replacement Algorithm

Each page can be labeled with the number of instructions that will be executed before that page is first referenced. The optimal page algorithm simply says that the page with the highest label should be removed. If one page will not be used for 8 million instructions and another page will not be used for 6 million instructions, removing the former pushes the page fault that will fetch it back as far into the future as possible.
The only problem with this algorithm is that it is unrealizable. At the time of the page fault, the operating system has no way of knowing when each of the pages will be referenced next. Still, by running a program on a simulator and keeping track of all page references, it is possible to implement optimal page replacement on the second run by using the page reference information collected during the first run.

Despite this limitation, algorithms exist that can offer near-optimal performance — the operating system keeps track of all pages referenced by the program, and it uses those data to decide which pages to swap in and out on subsequent runs. This algorithm can offer near-optimal performance, but not on the first run of a program, and only if the program's memory reference pattern is relatively consistent each time it runs.

In short:
* Belady's optimal algorithm for the minimum number of page faults.
* Replace the page that will be referenced furthest in the future or not at all.
* Problem: we cannot implement it, because we cannot predict the future.
* This is the best case.
* Can use it to compare against other algorithms.


Page Replacement

Page replacement algorithms decide which memory pages to page out (swap out, write to disk) when a page of memory needs to be allocated. Paging happens when a page fault occurs and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold. Requirements for page replacement algorithms have changed due to differences in operating system kernel architectures. In particular, most modern OS kernels have unified virtual memory and file system caches, requiring the page replacement algorithm to select a page from among the pages of both user program virtual address spaces and cached files.

Replacement algorithms can be local or global.
When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process (or a group of processes sharing a memory partition). A global replacement algorithm is free to select any page in memory.
There are a variety of page replacement algorithms:
- Optimal page replacement algorithm
- Not Recently Used (NRU)
- First-in, first-out (FIFO)
- Second-chance
- Clock
- Least recently used (LRU)
- Random replacement algorithm
- Not frequently used (NFU)
- Aging

