
Tuesday, June 25, 2013

Explain demand paging and page replacement

Demand paging and page replacement are two very important memory management strategies in computer operating systems.

About Demand Paging
- Demand paging is the opposite of anticipatory paging.
It is a memory management strategy developed for managing virtual memory.
- In an operating system that uses demand paging, a copy of a disk page is brought into physical memory only when a request is made for it, i.e., whenever a page fault occurs.
- Consequently, a process starts executing with none of its pages loaded into main memory, and a series of page faults follows until all of its required pages have been loaded.
- Demand paging comes under the category of lazy loading techniques.
Under this strategy, a page is brought into main memory only if the executing process demands it.
- That is why the strategy is named demand paging. It is sometimes also called lazy evaluation.
- A page table implementation is required for using the demand paging technique.
- The purpose of this table is to map logical memory to physical memory.
- Each table entry carries a valid/invalid bit for marking whether a page is present in memory.

The following steps are carried out whenever a process demands a page:
  1. An attempt is made to access the page.
  2. If the page is present in memory, the usual instructions are followed.
  3. If the page is not there, i.e., its entry is invalid, a page fault is generated.
  4. The memory reference to the virtual address is checked for validity. If it is an illegal memory access, the process is terminated; if not, the requested page has to be paged in.
  5. A disk operation is scheduled to read the requested page into physical memory.
  6. The instruction that raised the page fault trap is restarted.
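The steps above can be sketched as a toy pager in Python; the class and attribute names (Pager, backing_store, handle_fault) are illustrative, not a real kernel API:

```python
# Minimal sketch of demand paging: a page is loaded only on first access.
# All names here are illustrative; a real OS does this in the kernel
# with hardware-managed valid bits and a trap mechanism.

class IllegalAccess(Exception):
    """Raised for references outside the process's address space."""

class Pager:
    def __init__(self, backing_store):
        self.backing_store = backing_store   # page number -> contents on "disk"
        self.page_table = {}                 # only valid (resident) pages appear here

    def access(self, page):
        if page in self.page_table:          # valid bit set: usual access
            return self.page_table[page]
        return self.handle_fault(page)       # invalid: page fault trap

    def handle_fault(self, page):
        if page not in self.backing_store:   # illegal reference: terminate
            raise IllegalAccess(f"illegal access to page {page}")
        contents = self.backing_store[page]  # disk read into a free frame
        self.page_table[page] = contents     # mark the page valid
        return contents                      # restart the faulting access

pager = Pager(backing_store={0: "code", 1: "data"})
print(pager.access(1))   # first access faults, loads the page, then succeeds
print(pager.access(1))   # second access hits the page table directly
```

The second access skips the fault path entirely, which is the whole point of keeping the page resident once loaded.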
- The lazy nature of this strategy is itself a great advantage.
- With more space available in physical memory, it allows more processes to execute, leading to a decrease in context switching time.
- Less latency occurs during loading at program start-up.
- This is because very little data flows between main memory and secondary memory at that point.


About Page Replacement
- When the number of available real memory frames runs low, a page stealer is invoked.
- The stealer searches through the PFT (page frame table) for pages to steal.
This table records which pages have been referenced and which have been modified.
- If the page stealer finds a page whose reference flag is set, it does not steal that page but resets the flag.
- On a later pass, if the stealer comes across the page while it is still flagged as un-referenced, it steals it.
- Any change made to a page is indicated by means of the modify flag.
- If the modify flag of the page to be stolen is set, a page-out call has to be made before the page stealer can take it.
- Pages that belong to currently executing working segments are written to the so-called paging space, while pages of persistent segments are written back to disk.
- Page replacement is carried out by algorithms called page replacement algorithms.
- Besides choosing victims, these algorithms also keep track of page faults.
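The reference-flag behavior described above is essentially a second-chance (clock) scheme; a minimal sketch with illustrative names, assuming the reference flags are already available:

```python
# Second-chance sketch: a referenced page is spared once (its flag is
# reset); a page found un-referenced on a later pass is stolen.

def steal_page(frames, referenced):
    """frames: list of resident page ids; referenced: page id -> flag."""
    while True:
        for page in list(frames):
            if referenced.get(page):
                referenced[page] = False   # spare the page, reset its flag
            else:
                frames.remove(page)        # un-referenced: steal this page
                return page

frames = ["A", "B", "C"]
referenced = {"A": True, "B": True, "C": False}
print(steal_page(frames, referenced))   # C is stolen; A and B get a second chance
```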


Monday, June 24, 2013

Explain the page replacement algorithms - FIFO, LRU, and Optimal

- Most computer operating systems use paging for virtual memory management. 
- Whenever a page fault occurs, some pages are swapped in and others are swapped out. Who decides which pages are to be replaced, and how? 
- This purpose is served by the page replacement algorithms. 
- Page replacement algorithms decide which pages are to be paged out, i.e., written to disk, when a page of memory has to be allocated. 
- Paging takes place only upon the occurrence of a page fault.
- In such situations a free page cannot always suffice, either because none is available or because the number of available pages is below a threshold. 
- If a previously paged-out page is referenced again, it has to be read in from the disk again. 
- But for doing this, the operating system has to wait for the completion of the input/output operation. 
- The quality of a page replacement algorithm is measured by the time it takes for a page-in. 
- The less it is, the better the algorithm is. 
- A page replacement algorithm studies the information about page accesses provided by the hardware, and then decides which pages should be replaced so that the number of page faults is minimized.

In this article we shall see about some of the page replacement algorithms.

FIFO (first-in, first-out): 
- Being the simplest of all the page replacement algorithms, FIFO has the lowest overhead and requires minimal bookkeeping on the part of the OS. 
- The operating system stores all pages in a queue in memory.
- The pages that have arrived recently are kept at the back, while the old ones stand at the front of the queue.
- When making a replacement, the oldest page is selected and replaced. 
- Even though this replacement algorithm is cheap as well as intuitive, it does not perform well in practice. 
- Therefore, it is rarely used in its original form. 
- The VAX/VMS operating system makes use of the FIFO replacement algorithm with some modifications. 
- By skipping a limited number of entries, it obtains a partial second chance.
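A minimal sketch of FIFO replacement, counting faults for a reference string; the function name and the sample string (Belady's classic one) are just for illustration:

```python
# FIFO page replacement sketch: the page at the front of the queue
# (the oldest resident page) is evicted when a frame is needed.
from collections import deque

def fifo_faults(references, num_frames):
    frames = deque()
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the oldest page
            frames.append(page)       # newest page joins at the back
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: more frames, yet more faults
```

This reference string also shows why plain FIFO performs poorly: adding a fourth frame increases the fault count, the well-known Belady's anomaly.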

Least recently used (LRU): 
- This replacement algorithm resembles NRU in name. 
However, the difference is that LRU tracks page usage over a certain period of time. 
- It is based on the idea that the pages used heavily in the current set of instructions will be used heavily in the next set of instructions as well. 
- LRU provides near-optimal performance in theory; in practice, however, it is quite expensive to implement. 
- A few implementation methods have been devised to reduce the costs of implementing this algorithm. 
- Of these, the linked list method proves to be the costliest. 
- It is so expensive because it involves moving items around in memory on every reference, which is indeed a very time-consuming task. 
- There is another method that requires hardware support.
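The linked-list idea can be sketched with an ordered dictionary standing in for the list: touching a page moves it to the most-recently-used end. Names and the reference string are illustrative:

```python
# LRU sketch: pages are kept ordered from least to most recently used;
# on a fault with full frames, the least recently used page is evicted.
from collections import OrderedDict

def lru_faults(references, num_frames):
    frames = OrderedDict()    # insertion order = recency order
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)        # touch: now most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # 10 faults
```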

Optimal page replacement algorithm: 
- This is also known as the clairvoyant replacement algorithm. 
- When a page has to be swapped in, it replaces the page that, according to the operating system, will not be used again for the longest time. 
- In practice, this algorithm is impossible to implement in a general purpose OS, because the approximate time when a page will next be used is difficult to predict.
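The clairvoyant choice can only be simulated offline, when the entire reference string is known in advance; a sketch with illustrative names:

```python
# Optimal (Belady's) replacement sketch: evict the resident page whose
# next use lies furthest in the future, or that is never used again.

def optimal_faults(references, num_frames):
    frames = set()
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            future = references[i + 1:]
            # distance to next use; pages never used again rank furthest
            def next_use(p):
                return future.index(p) if p in future else len(future)
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

print(optimal_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # 7 faults
```

No online algorithm can beat this fault count, which is why the optimal algorithm serves as a benchmark even though it cannot be implemented in practice.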


Friday, June 21, 2013

Explain the paged memory and segmentation techniques

Paging and segmentation are both memory management techniques. 

What is Paging?

- This technique is designed so that the system can store data in, and retrieve it from, virtual or secondary memory, to be loaded into main memory and used. 
- In this scheme, the operating system retrieves data from secondary memory in blocks of the same size, commonly known as pages. 
- This is why the technique is called the paging memory management scheme. 
- This memory management scheme has a major advantage over the segmentation scheme. 
- The advantage is that non-contiguous physical address spaces are allowed. 
- In segmentation, non-contiguous physical address spaces are not allowed. 
Before paging came into use, systems had to fit the whole program into a contiguous memory space. 
- This in turn led to a number of issues related to fragmentation and storage. 
Paging is very important for the implementation of virtual memory in many general purpose operating systems. 
- With the help of the paging technique, data that cannot fit into physical memory, i.e., RAM, can still be used. 
- Paging comes into play whenever a program attempts to access pages that are not currently mapped to main memory (RAM). 
- Such a situation is termed a page fault. 
- At this point, control is handed over to the operating system to handle the page fault.
- This is done in a way that is not visible to the program that raised the interrupt. 

The operating system has to carry out the following instructions:
  1. Determine the location of the requested data in auxiliary storage.
  2. Obtain an empty page frame in main memory to be used for storing the requested data.
  3. Load the requested data into the empty page frame obtained above.
  4. Update the page table so that the new data is mapped.
  5. Return control to the interrupting program and retry the instruction that caused the fault.

What is Segmentation?

- This memory management technique involves dividing the main memory into various sections or segments.
- In a system that uses this technique, a reference to a memory location contains a value identifying the segment and an offset within it. 
- Object files produced during program compilation make use of segments when they are linked together to form a program image and when that image is loaded into memory.  
- Different segments might be created for different program modules. 
- Some programs may even share some of the segments.
- In one sense, memory protection is implemented by means of memory segmentation itself.
- Paging and segmentation can be combined for memory protection. 
- The size of a memory segment is not always fixed and can be as small as a byte. 
- Segments represent natural divisions such as data tables or individual routines.
This makes segmentation visible to the programmer. 
- A length and a set of permissions are associated with every segment. 
- A process can refer to a segment only in a way permitted by this set of permissions. 
- If it does not, a segmentation fault is raised by the operating system. 
Segments also carry a flag that indicates whether the segment is present in the system's main memory. 
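The length and permission checks described above can be sketched as follows; the Segment fields, base addresses, and the translate helper are hypothetical, not any particular hardware's layout:

```python
# Segmented address translation sketch: a reference names a segment and
# an offset; the offset is checked against the segment's length and the
# access against its permissions before a physical address is formed.

class SegmentationFault(Exception):
    pass

class Segment:
    def __init__(self, base, length, perms):
        self.base, self.length, self.perms = base, length, perms

def translate(table, seg_id, offset, access="r"):
    seg = table[seg_id]
    if offset >= seg.length:        # reference past the end of the segment
        raise SegmentationFault(f"offset {hex(offset)} out of bounds")
    if access not in seg.perms:     # access kind not permitted
        raise SegmentationFault(f"{access!r} access not permitted")
    return seg.base + offset        # physical address = base + offset

table = {0: Segment(base=0x1000, length=0x400, perms="rx"),   # code segment
         1: Segment(base=0x8000, length=0x200, perms="rw")}   # data segment
print(hex(translate(table, 1, 0x10, "w")))   # 0x8010
```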


Friday, April 26, 2013

What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?


- Thrashing takes place when the virtual memory subsystem of a computer is caught in a state of constant paging.
- Data in memory is rapidly exchanged with data on disk, to the exclusion of most application-level processing. 
- Thrashing degrades the performance of the computer and may even cause it to collapse. 
- The problem may worsen further until the issue is identified and addressed. 
- If there are not enough page frames available for a job, it becomes very likely that the system will suffer from thrashing, since thrashing is an activity involving heavy paging. 
- This also leads to a high page fault rate. 
- This in turn cuts down the utilization of the CPU. 
- Modern systems utilize paging to execute many programs at once.
- However, this is what makes them prone to thrashing. 
- But this occurs only if the system does not currently have as much memory as the application requires, or if the disk access time is too long. 

- Thrashing is also quite common in communication systems where conflicts over internal bus access are frequent. 
- The order of magnitude by which the latency and throughput of a system might degrade depends upon the algorithms and the configuration in use. 
- In systems using virtual memory, workloads and programs exhibiting insufficient locality of reference may lead to thrashing. 
- Thrashing occurs when the physical memory of the system cannot contain the program or its workload. 
- Thrashing can also be called constant data swapping.
- Older systems were low-end computers whose RAM was insufficient for modern usage patterns. 
- Thus, when their memory was increased, they became noticeably faster. 
- This happened because the additional memory reduced the amount of swapping and thus increased the processing speed. 
- The IBM System/370 mainframe faced this kind of situation. 
- In it, a certain instruction sequence consisted of an EXECUTE instruction pointing to a MOVE instruction. 
- Both instructions crossed a page boundary, and the source from which the data was to be moved and the destination where it was to be placed each crossed a page boundary as well. 
- Thus, this particular instruction required 8 pages in memory at the same time. 
- If the operating system allocated fewer than 8 pages, a page fault was sure to occur. 
- Every attempt to restart the failing instruction would then fault again, so the system thrashed. 
- This could reduce CPU utilization to almost zero!

How can a system handle thrashing?

For resolving the problem of thrashing, the following things can be done:
1. Increase the amount of main memory, i.e., the RAM, in the system. This is the best solution and remains helpful in the long term.
2. Decrease the number of programs to be executed by the system.
3. Replace the programs that use memory heavily with less memory-hungry equivalents.
4. Make improvements in spatial locality.
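One way a system can detect thrashing and apply remedy 2 automatically, not described in the text above but commonly taught as page-fault frequency (PFF), is to watch each process's fault rate and adjust its frame allocation; the thresholds and names below are illustrative:

```python
# Page-fault-frequency sketch: a fault rate above the upper bound means
# the process likely lacks frames (thrashing risk), so grant more; a
# rate below the lower bound means frames can be reclaimed.

def adjust_frames(fault_rate, frames, upper=0.10, lower=0.01, step=1):
    if fault_rate > upper:
        return frames + step              # thrashing risk: grant a frame
    if fault_rate < lower:
        return max(1, frames - step)      # over-provisioned: reclaim one
    return frames                         # fault rate is acceptable

print(adjust_frames(0.25, frames=4))    # 5: faulting too often
print(adjust_frames(0.001, frames=4))   # 3: frames can be reclaimed
```

If no free frames remain to grant, the system can instead suspend (swap out) a whole process, which directly decreases the number of programs competing for memory.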

- Thrashing can also occur in cache memory, i.e., the faster storage that is used for speeding up data access. 
- It is then called cache thrashing. 
- It occurs when the cache is accessed in a way that leaves it of no benefit. 
- When this happens, many main memory locations compete for the same cache lines, which in turn leads to a large number of cache misses.


Friday, April 19, 2013

What is Paging? Why is it used?


- Paging is a very important memory management concept in computer operating systems. 
- It is essentially a memory management scheme used for storing data to, as well as retrieving it from, secondary memory devices.
- Under this scheme, the operating system retrieves data from the secondary storage devices. 
- The data is in the form of blocks all having the same size. 
- These data blocks are called pages. 
- In paging, the physical address space of a process can be non-contiguous. 
- Paging is a very important concept for implementing virtual memory in operating systems designed for contemporary and general use. 
- This allows disk storage to be used for data that cannot fit into the RAM. 
- The main functions of the paging technique are carried out when a program attempts to access pages that have no mapping to physical RAM. 
- This situation is commonly known as a page fault. 
- In this situation, the OS takes control to handle the fault. 
- This is done in a way that is invisible to the application. 

The operating system carries out the following tasks in paging:
  1. Locates the data's address in auxiliary storage.
  2. Obtains a vacant page frame in physical memory to be used for storing the data.
  3. Loads the data requested by the application into the obtained page frame.
  4. Updates the page table to show the new mapping.
  5. Gives execution control back to the program transparently, and retries the instruction that caused the fault.

- If space is not available in RAM for storing all the requested data, another page has to be removed from RAM. 
- If all of the page frames are filled up, a page frame whose data can be evicted is selected and emptied. 
- A page frame is said to be dirty if it has been modified since it was last read into the RAM. 
- In such a case it has to be written back to its original location on the drive before it is freed. 
- When such a freed page is referenced again, a page fault will occur, which requires obtaining an empty frame and reading the contents back from the drive into it. 
- Paging systems must be efficient in determining which frames are to be emptied. 
- Many page replacement algorithms have been designed for accomplishing this task. 
- Some of the most commonly used replacement algorithms are:
  - LRU, or least recently used
  - FIFO, or first in, first out
  - LFU, or least frequently used
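Of these, LFU keeps a use count per resident page and evicts the page with the smallest count; a minimal sketch (the function name and reference string are illustrative):

```python
# LFU sketch: each resident page carries a use count; on a fault with
# full frames, the page with the lowest count is evicted.
from collections import Counter

def lfu_faults(references, num_frames):
    frames = set()
    counts = Counter()
    faults = 0
    for page in references:
        if page in frames:
            counts[page] += 1               # hit: bump the use count
            continue
        faults += 1
        if len(frames) == num_frames:
            victim = min(frames, key=lambda p: counts[p])
            frames.remove(victim)           # evict least frequently used
            del counts[victim]
        frames.add(page)
        counts[page] = 1
    return faults

print(lfu_faults([1, 2, 3, 1, 1, 4, 5], 3))   # 5 faults: page 1 is never evicted
```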

- To further increase responsiveness, paging systems may employ various strategies to predict which pages will be needed soon. 
- Such systems will attempt to load pages into main memory preemptively, before a program references them. 
- When demand paging is used, paging takes place only upon a data request, and not prior to it. 
- In a demand pager, execution of a program begins with none of its pages loaded into the RAM. 


Friday, January 22, 2010

Demand Segmentation

Although demand paging is considered the most efficient virtual memory system, a significant amount of hardware is required to implement it. When this hardware is lacking, less efficient means are sometimes devised to provide virtual memory. A case in point is demand segmentation.
The operating system allocates memory in segments, rather than in pages. It keeps track of these segments through segment descriptors, which include information about the segment's size, protections, and location. A process does not need to have all its segments in memory to execute. Instead, the segment descriptor contains a valid bit for each segment to indicate whether the segment is currently in memory. If the segment is in memory, the access continues unhindered. If the segment is not in memory, a trap to the operating system occurs. The operating system then swaps out a segment to secondary storage and brings in the entire requested segment. The interrupted instruction then continues.
To determine which segment to replace in the case of a segment fault, the operating system uses another bit in the segment descriptor, called an accessed bit. It is set whenever any byte in the segment is either read or written. A queue is kept containing an entry for each segment in memory. After every time slice, the operating system places at the head of the queue any segments with a set access bit.
It then clears all access bits. In this way, the queue stays ordered with the most recently used segments at the head.
Demand segmentation requires considerable overhead. Thus, demand segmentation is not an optimal means for making best use of the resources of a computer system.


Monday, January 11, 2010

Performance of Demand Paging

Advantages of Demand Paging :
* Only loads pages that are demanded by the executing process.
* As there is more space in main memory, more processes can be loaded, reducing the context switching that consumes large amounts of resources.
* Less loading latency occurs at program start-up, as less information is accessed from secondary storage and less information is brought into main memory.
* Needs no extra hardware support beyond what paging needs, since a protection fault can be used to signal a page fault.

Disadvantages of Demand Paging :
* Individual programs face extra latency when they access a page for the first time. So demand paging may have lower performance than anticipatory paging algorithms such as pre-paging.
* Programs running on low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.
* Memory management with page replacement algorithms becomes slightly more complex.
* Possible security risks, including vulnerability to timing attacks.

Performance Of Demand Paging :
Let p be the probability of a page fault (0 <= p <= 1). We would expect p to be close to zero, i.e., there will be only a few page faults. The effective access time is then:
effective access time = (1 - p) * ma + p * page fault time
where ma is the memory access time.
To compute the effective access time, we must know how much time is needed to service a page fault. A page fault causes the following sequence to occur :
- Trap to the operating system.
- Save the user registers and process state.
- Determine that the interrupt was a page fault.
- Check that the page reference was legal and determine the location of the page on the disk.
- Issue a read from the disk to a free frame.
- While waiting, allocate the CPU to some other user.
- Interrupt from the disk.
- Save the registers and process state for the other user.
- Determine that the interrupt was from the disk.
- Correct the page table and other tables to show that the desired page is now in memory.
- Wait for the CPU to be allocated to this process again.
- Restore the user registers, process state, and new page table, then resume interrupted instruction.
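Plugging illustrative numbers into the formula above makes the cost of the sequence just listed concrete; the 100 ns memory access and 8 ms fault service time are assumed values, not taken from the text:

```python
# Effective access time for demand paging, per the formula above:
# eat = (1 - p) * ma + p * page_fault_time, with everything in ns.
# ma = 100 ns and an 8 ms (8,000,000 ns) service time are illustrative.

def effective_access_time(p, ma_ns=100, fault_ns=8_000_000):
    return (1 - p) * ma_ns + p * fault_ns

print(effective_access_time(0.0))      # no faults: plain memory access time
print(effective_access_time(0.001))    # one fault per thousand accesses
```

Even a fault rate of one in a thousand accesses pushes the effective access time from 100 ns to roughly 8,100 ns, a slowdown of about 80 times, which is why p must be kept extremely small.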

