

Showing posts with label Memory.

Tuesday, August 6, 2013

What is meant by an optimal route?

- For selecting a path or route, a routing metric has to be applied to a number of candidate routes so that the best of them can be chosen. 
- This best route is called the optimal route with respect to the routing metric used. 
- This routing metric is computed with the help of the routing algorithms in computer networking.
- It consists of information such as hop count, network delay, load, MTU, path cost, communication cost, reliability and so on.
- Only the best or the optimal routes are stored in the routing tables that reside in the memory of the routers. 
- The other information is stored in either the topological or the link state databases. 
- There are many types of routing protocols, and each of them has a routing metric specific to it. 
- Some external heuristic is required by multi-protocol routers for selecting between the routes determined using different routing protocols. 
For example, the administrative distance is a value that is attributed to every route in Cisco routers. 
- Here, a smaller distance means that the protocol is considered more reliable. 
- Host-specific routes to a certain device can be set up by the local network admin. 
- This offers more control over the usage of the network, along with better overall security and permissions for testing. 
- This advantage comes in handy especially when the routing tables and the connections need to be debugged. 

In this article we discuss optimal routes. 
- With the growing popularity of IP networks as mission-critical tools for business, the need for methods and techniques by which the network’s routing posture can be monitored is increasing.
- Many routing issues, or even a single incorrect route, can lead to undesirable effects on the network such as downtime, flapping or performance degradation. 
- Route analytics is the set of techniques and tools used for monitoring the routing in a network. 

The performance of the network is measured using the following 2 factors:
  1. Throughput or the quantity of service: This includes the amount of data that is transmitted and the time it takes to transfer it.
  2. Average packet delay or quality of service: This includes the time taken by a packet to arrive at its destination and the response of the system to the commands entered by the user.
- There is always a constant battle between fairness and optimality, or we can say between quantity of service and quality of service. 
- For optimizing the throughput, the paths existing between the nodes have to be saturated, while the response time from the source point to the destination point must also be watched. 

For finding the optimal routes, we have two types of algorithms namely:
  1. Adaptive Algorithms: These algorithms are meant for the networks in which the routes change in a dynamic manner. Here the information regarding the route to be followed is obtained at run time itself, from the adjacent routers as well as all the other routers. The routes change whenever there is a change in the load, a change in the topology, or after every delta T seconds.
  2. Non-adaptive algorithms: In these algorithms the routing decisions are not based on measurements of the current traffic or topology, so they cannot adapt when conditions change. The routes thus obtained are called static routes and are computed in advance and loaded at boot time.

Finding optimal routes relies on the principle of optimality, according to which, if an intermediate router lies on the optimal path from the source to the destination, then the optimal path from that intermediate router to the destination lies along the same route. 


Sunday, July 21, 2013

Comparison between Virtual Circuit and Datagram subnets

Difference #1:
- In virtual circuits the packets are allowed to contain a circuit number rather than the full address of the destination. 
- This reduces the requirement for a larger memory and bandwidth. 
- This also makes them cheaper in cost. 
- On the other hand, the datagrams have to contain the full destination address rather than a small circuit number.
- This causes a significant overhead in the datagram subnets. 
- Also, this leads to wastage of the bandwidth. 
- All this implies that the datagram subnets are more costly when compared to the virtual circuits. 

Difference #2:
- A set-up phase is required for the virtual circuits. 
- Establishing this phase requires a lot of resources as well as a lot of time. 
- A datagram subnet, in contrast, does not require a set-up phase. 
- Hence, there is no such requirement of resources.

Difference #3:
- In virtual circuits, the circuit numbers are used by the router for indexing purposes. 
- These numbers are stored in a table and are used for finding out the destination of the packet. 
- This procedure is quite simple when compared with the one used by the datagram subnets. 
- The procedure used in datagram subnets for determining the destination of the packet is quite complex. 

Difference #4:
- Virtual circuits allow resources to be reserved in advance, at the establishment of the connection.
- This has a great advantage, which is that congestion is avoided in the subnet. 
- However, in the datagram subnets, it is quite difficult to avoid congestion. 

Difference #5:
- If a crash occurs in a router, then it will lose its memory. 
- Even if it comes back up after some time, all the virtual circuits that pass via it must be aborted. 
- This is not a major problem in the case of the datagram subnets. 
- Here, if a router crashes, the only packets that will suffer are the ones that were queued at that router at that instant of time. 

Difference #6:
- The virtual circuits can vanish as a result of a loss or fault on the current communication line.
- In datagram subnets, it is comparatively easy to compensate for a fault or loss on the communication line. 

Difference #7:
- In virtual circuits there is one more cause of traffic congestion. 
- This cause is the use of fixed routes for the transmission of the data packets throughout the network. 
- This also leads to the problem of unbalanced traffic. 
- In datagram subnets the routers are given the responsibility of balancing the traffic over the entire network.
- This has been made possible because it is allowed to change the routes halfway through the connections. 

Difference #8:
- Virtual circuits are one way of implementing the connection-oriented services. 
- For various types of datagram subnets, a number of protocols are defined around the Internet Protocol. 
- The Internet Protocol provides the datagram service at the internet layer. 
- In contrast with the virtual circuits, datagram subnets provide a connection-less service. 
- It is a best-effort message delivery service, but at the same time it is unreliable. 
- There are a number of high-level protocols, such as TCP, that depend upon the datagram service of the Internet Protocol.
- This calls for additional functionality. 
- The datagram service of IP is even used by UDP. 
- The fragments of a datagram might be referred to as the data packets. 
- IP and UDP both provide unreliable services, and this is why their units are termed datagrams. 
- The units of TCP are referred to as TCP segments to distinguish them from datagrams. 


Thursday, July 18, 2013

What is a routing algorithm in network layer?

About Routing
- The process of selecting paths in the network along which the data and the network traffic can be sent is termed as routing. 
- Routing is a common process carried out in a number of networks such as the transportation networks, telephone networks (in circuit switching), and electronic data networks (for example, the internet). 
- The main purpose of routing is to direct the packet forwarding from the source to its destination via the intermediate nodes. 
- These nodes are nothing but hardware devices, namely gateways, bridges, switches, firewalls, routers and so on. 
- A general-purpose system which does not have any of these specialized routing components can also participate in routing, but only to a limited extent.

But how to know where the packets have to be routed? 
- This information about the source and the destination address is found in a table called the routing table, which is stored in the memory of the routers. 
These tables store the records of routes to a number of destinations over the network. 
- Therefore, construction of the routing tables is also an important part of an efficient routing process. 
- Routing algorithms are used to construct this table and for selecting the optimal path or route to a particular destination. 

- A majority of the routing algorithms are based on single-path routing techniques, while a few others use multi-path routing techniques. 
- The latter allows for the use of alternative paths if one is not available. 
- In some cases, the algorithm may discover equal or overlapping routes. 
- In such cases the following 3 criteria are considered for deciding which route is to be used:
  1. Administrative distance: This criterion is valid when different routing protocols are being used. It prefers a lower distance.
  2. Metric: This criterion is valid when only one routing protocol is being used throughout the network. It prefers a low-cost route.
  3. Prefix-length: This criterion does not depend on whether the same protocol is being used or there are many different protocols involved. It prefers the longer subnet masks.
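To make the three criteria concrete, here is a minimal Python sketch (not from the original article) of how a router might rank candidate routes: longest prefix wins first, with administrative distance and then metric breaking ties. The route records, field names and sample prefixes are invented for the example.

```python
import ipaddress

def select_route(routes, dest):
    """Pick the route for dest: longest prefix wins; ties are broken by
    lower administrative distance, then by lower metric."""
    candidates = [r for r in routes
                  if ipaddress.ip_address(dest) in ipaddress.ip_network(r["prefix"])]
    if not candidates:
        return None
    return min(candidates,
               key=lambda r: (-ipaddress.ip_network(r["prefix"]).prefixlen,
                              r["admin_distance"], r["metric"]))

routes = [
    {"prefix": "10.0.0.0/8",  "admin_distance": 120, "metric": 3, "via": "A"},
    {"prefix": "10.1.0.0/16", "admin_distance": 110, "metric": 7, "via": "B"},
    {"prefix": "10.1.2.0/24", "admin_distance": 110, "metric": 9, "via": "C"},
]
print(select_route(routes, "10.1.2.5")["via"])   # C - the /24 is the longest match
print(select_route(routes, "10.2.0.1")["via"])   # A - only the /8 matches
```

Note that the /24 route wins for 10.1.2.5 even though its metric is the worst: prefix length is compared before distance or metric.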
Types of Routing Algorithms

Distance Vector Algorithms: 
- In these algorithms, the basic algorithm used is the Bellman–Ford algorithm. 
- In this approach, a cost number is assigned to each of the links that exist between the nodes of a network.
- The information is sent by the nodes from point A to point B through the route that results in the lowest total cost.
- The total cost is the sum of the costs of all the individual links in the route. 
The manner of operation of this algorithm is quite simple.
- Each node checks which of its immediate neighboring nodes can be reached with the minimum cost and proceeds from there.
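The Bellman–Ford relaxation at the heart of distance-vector routing can be sketched in a few lines of Python; the link table and node names here are hypothetical.

```python
def distance_vector(links, source):
    """Bellman-Ford: repeatedly relax every link until the distances settle.
    links: dict {(u, v): cost} for each directed link."""
    nodes = {n for edge in links for n in edge}
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):           # at most |V|-1 rounds of relaxation
        for (u, v), cost in links.items():
            if dist[u] + cost < dist[v]:      # a cheaper route to v was found
                dist[v] = dist[u] + cost
    return dist

links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5, ("C", "D"): 1}
costs = distance_vector(links, "A")
print(costs["B"], costs["C"], costs["D"])   # 1 3 4
```

The total cost of a route really is the sum of its link costs: A→B→C costs 1 + 2 = 3, beating the direct A→C link of cost 5.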

Link-state Algorithms: 
- This algorithm works based upon a graphical map of the network, which is supplied as input to it. 
- For producing this map, each of the nodes assembles the information regarding which nodes it can connect to in the network. 
- Then the router can itself determine which path has the lowest cost and proceed accordingly. 
- The path is selected using standard path selection algorithms such as the Dijkstra’s algorithm. 
- This algorithm results in a tree graph whose root is the current node. 
- This tree is then used for the construction of the routing tables.
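A minimal Python sketch of this link-state computation, running Dijkstra's algorithm over an assumed adjacency map (the graph shown is invented for the example):

```python
import heapq

def dijkstra(graph, source):
    """Link-state shortest paths: grow a tree of settled nodes from source.
    graph: dict {node: {neighbor: cost}} built from the network map."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry, already settled
        for v, cost in graph.get(u, {}).items():
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                heapq.heappush(heap, (d + cost, v))
    return dist

graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

Keeping track of each node's predecessor during the same pass would yield the tree rooted at the current node from which the routing table is built.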

Optimized link state Routing Algorithm: 
- This is the algorithm that has been optimized to be used in the mobile ad-hoc networks. 
- This algorithm is often abbreviated to OLSR (optimized link state routing). 
This algorithm is proactive and makes used of topology control messages for discovering and disseminating the information of the link’s state via mobile ad-hoc network. 


Tuesday, June 25, 2013

Explain about demand paging and page replacements

These are two very important memory management strategies in computer operating systems, namely demand paging and page replacement. 

About Demand Paging
- Demand paging is just the opposite concept of anticipatory paging. 
Demand paging is actually a memory management strategy developed for managing the virtual memory.
- In an operating system that makes use of the demand paging technique, a copy of a disk page is made and kept in the physical memory only when a request is made for it, i.e., when a page fault occurs. 
- It is obvious that the execution of a process starts with none of its pages loaded into the main memory, followed by a number of page faults occurring one after the other until all of its required pages have been loaded into the main memory. 
- Demand paging comes under the category of the lazy loading techniques. 
This strategy holds that only if the process in execution demands a page should it be brought into the main memory. 
- That’s why the strategy has been named demand paging. Sometimes it is even called lazy evaluation. 
- A page table implementation is required for using the demand paging technique.
- The purpose of this table is to map the logical memory to the physical memory. 
- This table uses a valid–invalid bit for marking a page as valid or invalid. 

The following steps are carried out whenever a process demands a page:
  1. An attempt is made to access the page.
  2. If the page is present in the memory, the usual instructions are followed.
  3. If the page is not there, i.e., it is marked invalid, then a page fault is generated.
  4. The memory reference to a location in the virtual memory is checked to see whether it is valid or not. If it is an illegal memory access, then the process is terminated. If not, the requested page has to be paged in.
  5. The disk operations are scheduled for reading the requested page into the physical memory.
  6. The instruction that raised the page fault trap is restarted.
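The steps above can be sketched as a small Python function. The page table, memory and backing store shown are simplified stand-ins invented for the example, not a real OS interface.

```python
def access_page(page_table, memory, backing_store, page):
    """Walk the demand-paging steps: a valid page is used directly; an
    invalid one triggers a page fault and is read in from backing store."""
    if page not in backing_store:
        raise MemoryError(f"illegal reference to page {page}")  # process terminated
    if page_table.get(page) == "valid":
        return memory[page]                  # usual access, no fault
    # page fault: schedule a "disk read" to bring the page into memory
    memory[page] = backing_store[page]
    page_table[page] = "valid"               # update the page table
    return memory[page]                      # restart the faulting instruction

backing_store = {0: "code", 1: "data"}
page_table, memory = {}, {}
print(access_page(page_table, memory, backing_store, 1))  # first access faults, then loads "data"
print(page_table)                                         # {1: 'valid'}
```

Execution starts with an empty page table, exactly as the article describes: every first touch of a page is a fault, and later touches are ordinary accesses.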
- The nature of this strategy is itself a great advantage. 
- Upon availability of more space in the physical memory, it allows execution of many processes, leading to a decrease in the context switching time.
- At the time of program start-up, less latency occurs during loading. 
- This is because the inflow and outflow of the data between the main memory and the secondary memory is very small.


About Page Replacement
- When fewer real memory frames are available, a page stealer is invoked. 
- This stealer searches through the PFT (page frame table) for pages to steal. 
This table stores flags recording which pages have been referenced and modified. 
- If a page found by the page stealer has its reference flag set, the stealer does not steal it but resets the flag for that page. 
- If, on the next pass, the page stealer comes across this page still un-referenced, it steals the page. 
- Any change made to a page is indicated by means of the modify flag.
- If the modify flag of the page to be stolen is set, then a page-out call has to be made before the page stealer does its work. 
- Thus, the pages that form a part of the currently executing segments are written to the so-called paging space, and the persistent segments are in turn written to the disk. 
- The page replacement is carried out by algorithms called the page replacement algorithms. 
- Besides this, these algorithms also keep track of the page faults. 
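The page stealer's reference-flag behaviour can be sketched in Python as follows; the frame-table layout and flag names are assumptions made for the example.

```python
def steal_page(frames, pageouts):
    """Sweep the frame table: a referenced page is spared once (its flag is
    reset); an un-referenced page is stolen. A modified page is recorded as
    needing a page-out before it can be stolen."""
    while True:                               # keep sweeping until a victim is found
        for page in list(frames):
            bits = frames[page]
            if bits["referenced"]:
                bits["referenced"] = False    # spared this pass; steal it next time
            else:
                if bits["modified"]:
                    pageouts.append(page)     # dirty page must be written out first
                del frames[page]
                return page

frames = {"p1": {"referenced": True,  "modified": False},
          "p2": {"referenced": False, "modified": True}}
outs = []
print(steal_page(frames, outs))   # p2 - it was un-referenced, so it is stolen
print(outs)                       # ['p2'] - it was dirty, so it needed a page-out
```

Note that p1 survives but loses its reference flag: if it is not touched before the next sweep, it becomes the next victim.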


Monday, June 24, 2013

Explain the page replacement algorithms - FIFO, LRU, and Optimal

- Paging is used by most computer operating systems for the purpose of virtual memory management. 
- Whenever a page fault occurs, some pages are swapped in and swapped out. Who decides which pages are to be replaced, and how? 
- This purpose is served by the page replacement algorithms. 
- Page replacement algorithms decide which pages are to be paged out, or written to the disk, when a page of the memory has to be allocated. 
- Paging takes place only upon the occurrence of a page fault.
- In such situations a free page cannot suffice, because either one is not available or the number of available free pages is less than a threshold. 
- If a previously paged-out page is referenced again, then it has to be read in from the disk again. 
- But for doing this, the operating system has to wait for the completion of the input/output operation. 
- The quality of a page replacement algorithm is denoted by the time it takes for a page-in. 
- The lesser it is, the better the algorithm is. 
- The information about accesses to pages, as provided by the hardware, is studied by the page replacement algorithm, and then it decides which pages should be replaced so that the number of page faults can be minimized. 

In this article we shall see about some of the page replacement algorithms.

FIFO (first-in, first-out): 
- This one, being the simplest of all the page replacement algorithms, has the lowest overhead and requires very little book-keeping on the part of the OS. 
- All pages are stored in a queue in the memory by the operating system.
- The ones that have recently arrived are kept at the back, while the old ones stand at the front end of the queue.
- While making a replacement, the oldest page is selected and replaced. 
- Even though this replacement algorithm is cheap as well as intuitive, in practice it does not perform well. 
- Therefore, it is rarely used in its original form. 
- VAX/VMS operating systems make use of the FIFO replacement algorithm after making some modifications. 
- If a limited number of entries are skipped, you get a partial second chance. 
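A small Python simulation, using an invented reference string, shows the FIFO behaviour and lets one count the faults:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO: on a fault, evict the page that has
    been resident the longest."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                          # hit, nothing to do
        faults += 1
        if len(frames) == nframes:
            frames.discard(queue.popleft())   # oldest page goes out
        frames.add(page)
        queue.append(page)
    return faults

seq = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(seq, 3))   # 9
print(fifo_faults(seq, 4))   # 10
```

This reference string is the classic illustration of why FIFO performs poorly: adding a fourth frame actually increases the fault count from 9 to 10 (Belady's anomaly).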

Least recently used (LRU): 
- This replacement algorithm bears a resemblance in name to NRU. 
However, there is a difference, which is that this algorithm tracks the page usage over a certain period of time. 
- It is based on the idea that the pages used many times in the current set of instructions will be used heavily in the next set of instructions also. 
- Near-optimal performance is provided by the LRU algorithm in theory; however, in practice it is quite expensive to implement. 
- For reducing the costs of the implementation of this algorithm, a few implementation methods have been devised. 
- Out of these, the linked list method proves to be the costliest. 
- This method is so expensive because it involves moving items about the list on every memory reference, which is indeed a very time-consuming task. 
- There is another method that requires hardware support.
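A list-based LRU sketch in Python makes the cost visible: the list has to be reshuffled on every single reference, which is exactly why the linked-list method is expensive in practice.

```python
def lru_faults(refs, nframes):
    """Count page faults under LRU: on a fault, evict the page whose last
    use lies furthest in the past."""
    frames, faults = [], 0        # ordered from least to most recently used
    for page in refs:
        if page in frames:
            frames.remove(page)   # refresh: move to the most-recent end
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)     # least recently used page goes out
        frames.append(page)
    return faults

seq = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(seq, 3))   # 10
```

On the same reference string FIFO with 3 frames gives 9 faults; LRU is not always better on an adversarial string, but on real programs its locality assumption usually pays off.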

Optimal page replacement algorithm: 
- This is also known as the clairvoyant replacement algorithm. 
- The page that has to be swapped in is placed in the frame of the page which, according to the operating system, will not be used again for the longest time in the future. 
- In practice, this algorithm is impossible to implement in the case of a general-purpose OS, because the time when a page will next be used is difficult to predict. 
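Although unusable online, the clairvoyant algorithm is easy to simulate when the whole reference string is known in advance, which makes it a useful yardstick. A Python sketch, with an invented reference string:

```python
def optimal_faults(refs, nframes):
    """Clairvoyant replacement: on a fault, evict the resident page whose
    next use lies furthest in the future (or never comes at all)."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            # a page never used again ranks past the end of the future
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            frames.discard(victim)
        frames.add(page)
    return faults

seq = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(seq, 3))   # 7
```

On this string FIFO needs 9 faults and LRU 10, so the clairvoyant bound of 7 shows how much room the practical algorithms leave on the table.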


Friday, June 21, 2013

Explain about the Paged Memory and Segmentation techniques?

Paging and segmentation are both memory management techniques. 

What is Paging?

- This technique has been designed so that the system can store and retrieve data from the virtual memory or secondary memory of the system, to be loaded into the main memory and used. 
- In this scheme, the data from the secondary memory is retrieved by the operating system in blocks of the same size, commonly known as pages. 
- This is why the technique has been called the paging memory-management scheme. 
- This memory management scheme has a major advantage over the segmentation scheme. 
- The advantage is that non-contiguous address spaces are allowed. 
- In segmentation, non-contiguous physical address spaces are not allowed. 
Before paging actually came into use, the whole program had to be fitted into a contiguous memory space by the systems. 
- This in turn led to a number of issues related to fragmentation and storage. 
Paging is very important for the implementation of the virtual memory in many of the operating systems that are general purpose. 
- With the help of the paging memory management technique, data that cannot be fitted into the physical memory, i.e., RAM, can be easily used. 
- Paging actually comes into play whenever a program makes an attempt at accessing pages that are not presently mapped to the main memory (RAM). 
- Such a situation is termed a page fault. 
- At this point the control is handed over to the operating system for handling the page fault.
- This is done in a way that is not visible to the interrupt-raising program. 

The operating system has to carry out the following instructions:
  1. Determining the location of the requested data in the auxiliary storage.
  2. Obtaining an empty page frame in the main memory to be used for storing the requested data.
  3. Loading the requested data into the empty page frame obtained above.
  4. Making updates to the page table so that the new data is shown as available.
  5. Returning control to the interrupting program and retrying the same instruction that caused the fault.

What is Segmentation?

- This memory management technique involves dividing the main memory into various sections or segments.
- In a system that makes use of this management technique, a value identifying the segment and an offset within it are contained in a reference to a memory location. 
- Object files that are produced during the compilation of programs make use of segments when they have to be linked together to form an image of the program, and this image has to be loaded into the memory.  
- For different program modules, different segments might be created. 
- Some programs may even share some of the segments.
- In one way, memory protection is implemented by means of memory segmentation itself.
- Paging and segmentation can be combined together for memory protection. 
- The size of a memory segment is not always fixed and can be as small as a byte. 
- Natural divisions such as the data tables or the individual routines are represented by the segments.
This makes the segmentation visible to the programmer. 
- With every segment, a set of permissions and a length are associated. 
- A segment can be referred to by a process only in a way that is permitted by this set of permissions. 
- If this is violated, a segmentation fault is raised by the operating system. 
Segments also carry a flag that indicates the presence of the segment in the main memory of the system. 
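A toy Python sketch of a segment-table lookup with the length and permission checks described above; the table layout, permission letters and addresses are invented for the example.

```python
def translate(segment_table, seg, offset, op):
    """Translate a (segment, offset) reference into a physical address.
    An out-of-bounds offset or a disallowed operation raises a
    segmentation fault."""
    entry = segment_table[seg]
    if offset >= entry["length"] or op not in entry["perms"]:
        raise MemoryError(f"segmentation fault: {op} at {seg}:{hex(offset)}")
    return entry["base"] + offset             # base + offset = physical address

segment_table = {
    "code": {"base": 0x1000, "length": 0x400, "perms": {"r", "x"}},
    "data": {"base": 0x5000, "length": 0x200, "perms": {"r", "w"}},
}
print(hex(translate(segment_table, "data", 0x10, "w")))   # 0x5010
```

A write into the code segment, or a read past a segment's length, would raise the fault instead of returning an address, which is exactly the protection the article describes.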


Thursday, June 20, 2013

Explain the single and multiple partition techniques?

There are a number of allocation techniques available, and all have different properties and allocate memory based on different principles. One prominent type of allocation is the partitioned allocation. 
- In partitioned allocation the primary or main memory of the system is divided into a number of contiguous memory blocks which are commonly known as the memory partitions. 
- Each of these partitions contains all the information that might be required for carrying out a specific task. 
- The task of allocating these memory partitions to various jobs and processes, and de-allocating them after use, is the duty of the memory management unit.  
But partitioned allocation cannot be carried out with the help of software alone. 
It requires some hardware support. 
- This support prevents the various jobs from interfering with each other and with the operating system as well. 
- For example, a lock-and-key technique was used by the IBM System/360. 
- Some other systems made use of registers called the base and bound registers, containing the partition limits, and these were also used for flagging any invalid access that was made. 
- A limits register was used by the UNIVAC 1108, having separate base and bound fields for data and instructions. 
- A technique called memory interleaving was used by this system for placing the so-called i-banks and d-banks in different memory modules. 

Partitions are of two types namely:
Ø  Static partitions: These are defined at boot time or IPL (initial program load), or sometimes by the computer operator. An example of a system using static partitions is the IBM System/360 operating system multiprogramming with MFT (a fixed number of tasks).
Ø  Dynamic partitions: These are created automatically for the specified job. An example is the IBM System/360 operating system multiprogramming with MVT (a variable number of tasks).

- Hardware such as the base and bound registers (GE-635, PDP-10, etc.) or the Burroughs Corporation B5500 descriptors is used for relocating the memory partitions. 
- The partitions that can be relocated can be compacted to form larger contiguous memory chunks in the main memory. 
- Some systems allow for swapping the partitions out to the secondary storage, gaining in turn some additional memory.
The partitioned allocation offers two types of allocation techniques namely:
  1. Single partition techniques
  2. Multiple partition techniques

- Single partition techniques are the ones that use a single time-sharing partition for swapping the memory partitions in and out. 
- These techniques are used by IBM’s TSO (time sharing option). 
- The multiple partition techniques are used with multiple time-sharing partitions. 
- In DOS systems, when the disk is partitioned, each of the partitions acts as if it were an individual disk drive. 
- Partitioning is useful for the systems where there is more than one operating system. 
- Partitioning techniques are meant for increasing the efficiency of the disk. 
Hard and soft partitioning is used on the Apple Macintosh computers. 
- The creation, relocation and deletion of the memory partitions can be harmful for the data. 
- That’s why it is good to have a backup of the data stored on your system. 
Several issues have to be considered if you want to install more than one operating system on your computer. 
- Day by day, disks are becoming less expensive and bigger. 
- You can go for separate disks for storing data and installing OSes. 


Saturday, June 15, 2013

What is Process State Diagram?

In the systems where multiple processors or multitasking are involved, a process has to go through a number of states. In this article we shall discuss these states. 
The kernel of the operating system may not recognize these states distinctly, but still, for understanding how processes are executed, they act as useful abstractions. 
These various states can be looked up in a process state diagram for a systematic view. This diagram shows the transitions of the process between various states with arrows. Processes can be stored both in the secondary or virtual memory and in the main memory, as per the situation.

Process States

- The primary process states occur in all types of the systems. 
- Processes in these states are usually stored in the main memory. 
Basically there are 5 major states of any process as discussed below:

Ø  Created: 
- It is also known as the ‘new’ state. 
- A process occupies this state upon its creation. 
- While waiting to be admitted to the next, ready state, the process lies in this state. 
- The admission scheduler will decide whether to admit the process to the next state or to delay it on a short or long term. 
- However, this admission is approved automatically in most desktop computers. 
- But in systems with real-time operating systems this is not true. 
- Here, the admission might be delayed by a certain amount of time.
- If too many processes are admitted to the ready state in a real-time operating system, conditions of over-contention and over-saturation might occur, leaving the system unable to meet its deadlines.

Ø  Ready or Waiting: 
- This state is taken up by a process when it has been loaded into the physical memory of the system and is waiting to be executed by the processor, or more precisely, waiting to be context-switched in by the dispatcher. 
- At any instant of time there might be a number of processes waiting for their execution. 
- Here the processes have to wait in a queue called the run queue, out of which only one process at a time will be taken up by a processor. 
- Processes that are waiting to obtain input from some event are not put into this ready queue.

Ø  Running: 
- When a process is selected by the CPU for execution, its state is changed to running. 
- One of the processors executes the instructions of the process one by one. 
Only one process can be run by a processor at a time.

Ø Blocked: 
- When a process is blocked because of some event, such as waiting for an I/O operation, it is put into the blocked state. 
- A process that has merely run out of the CPU time allocated to it, on the other hand, is normally moved back to the ready state. 

Ø  Terminated: 
- Termination of a process may occur either when its execution is complete or when it has been explicitly killed.
- This state is called terminated or halted. 
- The process is called a zombie process if, after coming into the terminated state, it is not removed from the main memory. 

There are two additional states for supporting the virtual memory. In these states the process is usually stored in the secondary memory of the system:

Ø Swapped out or Waiting: 
- A process is said to be swapped out when it is removed from the primary memory and placed in the secondary memory. 
- This is done by the mid-term scheduler. 
- After this, the state of the process changes to waiting.  

Ø  Swapped out and Blocked:
- In some cases, the processes that were in the blocked state might be swapped out. 
- The same process might be swapped in again, provided the conditions remain the same.
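The transitions between all seven states can be captured in a small table. The Python sketch below is an illustration only; the exact transition set (for example, where a swapped-out blocked process goes when its event completes) varies between systems, and the state names are those used in this article.

```python
# allowed transitions between the states described above (an assumption;
# real kernels differ in the details)
TRANSITIONS = {
    "created":         {"ready"},
    "ready":           {"running", "swapped-out"},
    "running":         {"ready", "blocked", "terminated"},
    "blocked":         {"ready", "swapped-blocked"},
    "swapped-out":     {"ready"},
    "swapped-blocked": {"swapped-out", "blocked"},
    "terminated":      set(),                  # final state, no way out
}

class Process:
    def __init__(self):
        self.state = "created"

    def move(self, new_state):
        """Follow an arrow of the state diagram; anything else is illegal."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

p = Process()
for s in ("ready", "running", "blocked", "ready", "running", "terminated"):
    p.move(s)
print(p.state)   # terminated
```

Trying to jump straight from created to running raises an error, mirroring the diagram: a new process must pass through the ready queue first.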



Friday, June 14, 2013

Explain the methods for free space management? – Part 2

- Managing the free space is easy only when the space that has to be managed is divided into units of fixed size. 
- If this is the case, a list of these fixed-size units can be kept. 
- The first entry can be returned when one is requested by the client. 
- Managing free space gets difficult when the space to be managed consists of units of variable sizes. 
- This is the case with the memory allocation library at the user level. 
- This is also the case in the physical memory when segmentation is used for the implementation of the virtual memory. 
- In such cases, external fragmentation is the main problem. 
- This leads to the splitting up of the free space into pieces of variable size. 
Subsequent requests might fail because of the unavailability of contiguous free space. 
- For example, a request might fail even if 20 bytes are available and the request requires only 15 bytes, because this 20-byte space is non-contiguous. 
Thus, the main problem is how the free space should be managed while satisfying variable-sized requests. 
How can these strategies manage free space while at the same time keeping the fragmentation in control?

Low-level Mechanisms: 
- Most of the allocators use some common mechanisms such as these: splitting and coalescing.
- Here, the free list consists of a set of elements describing all the free spaces available in the heap. 
- Once a pointer to a space is handed over to the program, the determination of the pointers to this space becomes somewhat difficult. 
- These pointers are stored either in the registers or in the variables at some point of execution. 
- However, this is not the case in garbage-collected and strongly typed languages, as a measure for enabling compaction to combat fragmentation. 
- Suppose the program makes a request for a single byte of memory.
- In such a case the action performed by the allocator is called splitting. 
- Here a free memory chunk is searched for and split into two. 
- The first part is returned to the calling request and the second part stays in the list itself. 
- This approach is used in allocators where small requests are made, requesting space that is smaller in size than any one chunk. 
- Most of the allocators use a corollary mechanism called coalescing of the free space. 
- Suppose a small heap is given and an application calls a function to free some memory it had obtained earlier. 
- This function returns to the heap a space lying in the middle of it; by coalescing, the allocator merges this freed chunk with its free neighbors into one larger chunk. 
- All these strategies are based upon simple policies. 
- An ideal allocator is one that both minimizes fragmentation and is fast. 
But since it is possible for the stream of free and allocation requests to be arbitrary, any strategy can go wrong if the wrong inputs are given. 
- Thus, a single best approach cannot be described. 
There are 4 major approaches in this regard:

1. Best fit: 
- This is the simplest strategy: first, search the list for free chunks of memory that are as big as or bigger than the requested size. 
- The smallest one among those found is returned, and this is known as the best-fitting chunk, or the smallest-fitting one. 
- One pass through the list is enough for finding the appropriate block to be returned.

2. Worst fit: 
- This one is just the opposite of the best fit. 
- It looks for the largest chunk and returns the amount requested while keeping the remaining memory space.

3. First fit: 
- This one simply looks for the first block that is big enough and allocates the requested amount out of it.

4. Next fit: 
- Here, one extra pointer is kept at the location where the last search ended, so that the next search begins there instead of at the start of the list. 
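The fit policies differ only in which free chunk they pick before splitting it. A Python sketch over a hypothetical free list of (offset, length) pairs; next fit is omitted for brevity, since it would just remember where the previous search stopped.

```python
def allocate(free_list, size, policy="first"):
    """Pick a free chunk under the given fit policy and split it: the
    requested bytes are carved off, the remainder stays on the list.
    free_list: list of (offset, length) pairs; returns an offset or None."""
    fits = [(i, c) for i, c in enumerate(free_list) if c[1] >= size]
    if not fits:
        return None                               # no chunk is big enough
    if policy == "first":
        i, (off, length) = fits[0]                # first chunk that fits
    elif policy == "best":
        i, (off, length) = min(fits, key=lambda f: f[1][1])   # smallest fit
    else:                                         # "worst": largest chunk
        i, (off, length) = max(fits, key=lambda f: f[1][1])
    if length == size:
        free_list.pop(i)                          # exact fit, no split needed
    else:
        free_list[i] = (off + size, length - size)   # split: keep remainder
    return off

free_list = [(0, 10), (20, 30), (60, 15)]
print(allocate(free_list, 12, "best"))   # 60 - the 15-byte chunk is the best fit
print(free_list)                         # [(0, 10), (20, 30), (72, 3)]
```

Notice the trade-off the article describes: best fit leaves a tiny 3-byte fragment behind, while worst fit would have split the 30-byte chunk and left a more usable 18-byte remainder.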

