


Tuesday, August 20, 2013

When is a situation called congestion?

- Network congestion is a common problem in queuing theory and data networking. 
- Sometimes a node or a link carries so much data that its QoS (quality of service) starts deteriorating. 
- This situation is known as network congestion, or simply congestion. 
This problem has the following typical effects:
Ø  Queuing delay
Ø  Packet loss
Ø  Blocking of new connections


- The last two effects lead to two further problems. 
- As the offered load increases incrementally, the throughput of the network either actually falls or increases only by very small amounts. 
- Network protocols use aggressive re-transmissions to compensate for packet loss. 
- These protocols thus tend to keep the network in a congested state even after the initial load has dropped to a level that would not by itself cause congestion. 
- Networks that use such protocols therefore exhibit two stable states under the same load level. 
- The stable state in which the throughput is low is called congestive collapse. 
- Congestive collapse is also called congestion collapse.
- In this condition, a packet-switched network settles into a state in which, because of congestion, little or no useful communication takes place.
- Even the little communication that does happen is of no use. 
There are certain points in the network, called choke points, where congestion usually occurs.
- At these points, the outgoing bandwidth is less than the incoming traffic. 
- Choke points are usually the points that connect a wide area network to a local area network. 
- When a network falls into such a condition, it is said to be in a stable state. 
- In this state, the traffic demand is high but the useful throughput is quite low.
- Packet delay is also very high. 
- The quality of service becomes extremely poor, and the routers cause packet loss as their output queues fill up and they discard packets. 
- The problem of network congestion was identified in 1984. 
- It first drew wide attention when the throughput of the NSFnet phase-I backbone collapsed to a small fraction of its capacity. 
- The problem continued to occur until Van Jacobson's congestion control method was implemented at the end nodes.

Let us now see what causes this problem. 
- When the number of packets sent to a router exceeds its packet-handling capacity, many packets are discarded by the intermediate routers. 
- These routers then expect re-transmission of the discarded information. 
- Early TCP implementations had very poor re-transmission behavior. 
- Whenever a packet was lost, the end points sent in extra packets repeating the lost information. 
- This doubled the data rate. 
- This is just the opposite of what should be done during congestion. 
- The entire network is thus pushed into a state of congestive collapse, causing heavy packet loss and reducing the throughput of the network. 
Congestion control as well as congestion avoidance techniques are used by modern networks to avoid congestive collapse. 
Various congestion control algorithms are available that can be implemented to avoid network congestion. 
- These algorithms are classified by several criteria, such as the amount of feedback, deployability, and so on. 
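Van Jacobson's scheme is built around an additive-increase/multiplicative-decrease (AIMD) window rule. The following is a toy sketch, not real TCP; the capacity value, the window units, and the loss rule are invented for illustration:

```python
def aimd_step(cwnd, loss, increase=1.0, decrease=0.5):
    """One round of additive-increase/multiplicative-decrease.

    cwnd: current congestion window (in segments)
    loss: whether a packet loss was detected this round
    """
    if loss:
        # Multiplicative decrease: back off sharply on congestion.
        return max(1.0, cwnd * decrease)
    # Additive increase: probe slowly for spare bandwidth.
    return cwnd + increase

# Simulate a sender whose window is cut whenever it exceeds capacity
# (hypothetical numbers; a real sender infers loss from missing ACKs).
capacity = 20.0
cwnd = 1.0
history = []
for _ in range(50):
    cwnd = aimd_step(cwnd, loss=(cwnd > capacity))
    history.append(cwnd)
```

The sawtooth pattern in `history` (grow linearly, halve on loss) is what keeps the two-stable-state trap described above from forming: senders back off faster than they probe.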


Wednesday, August 7, 2013

What is the difference between adaptive and non-adaptive algorithms?

- Routing is the process of sending information from one point of a network to another. 
- The originating point is called the source and the final point is called the destination. 
- Along the way a number of intermediate nodes may or may not be encountered. 
- Routing is sometimes compared with bridging. 
- To the casual observer both accomplish the same purpose, but this is not so. 
- The basic difference between the two is that routing is done at layer 3, the network layer of the OSI model, while bridging takes place at layer 2, the data link layer. 
- Because of this distinction, the input supplied to the two processes is different, and thus path selection occurs in different ways. 
The routing algorithm is included as part of the network layer software. 
- The primary responsibility of this software is to decide on which outgoing line the incoming traffic must be forwarded, i.e., what the next node will be. 
- Routing protocols use certain metrics to evaluate which path is most appropriate for transmitting a packet. 
- These metrics include reliability, path bandwidth, current load, delay, and so on. 
- They help in determining the optimal path towards a destination. 
Routing tables are created and maintained by the routing algorithms in order to aid the path determination process.
- What routing information the tables contain depends entirely on the routing algorithm being used. 
- The routing tables are filled with a variety of information by the routing algorithms. 
- If the internal subnet is a datagram subnet, a new routing decision has to be taken for every arriving datagram, since the routes can change after every transmission.
- In a virtual circuit subnet, on the other hand, all the decisions are taken when the virtual circuit is set up. 
- Once the connection is established, the same path is followed by all the packets. 

The routing algorithms can be classified into two major categories, namely:
  1. Non-adaptive algorithms and
  2. Adaptive algorithms
- Another name for non-adaptive algorithms is static algorithms. 
- Here the various routes are computed in advance and the same routes are followed by all the packets. 
- Adaptive algorithms are better known as dynamic algorithms. 
- In this type of algorithm the routes are not computed in advance; rather, the route is decided upon the arrival of a particular packet, depending on the traffic and the topology of the network. 

We have three different types of algorithms under the category of non-adaptive algorithms, as mentioned below:
  1. Shortest path routing: This algorithm makes use of Dijkstra's algorithm for computing the shortest path, where nodes and communication links are represented by the vertices and edges of a graph respectively.
  2. Flooding: Here, the arriving data packet is transmitted on all the outgoing lines except the one on which it arrived. Its selective flooding variation is commonly used.
  3. Flow-based routing: This algorithm takes into consideration the present flow of the network before deciding on which line the packet must be transmitted.
And following are some of the adaptive algorithms:

  1. Distance vector routing: Each router shares routing information only with its immediate neighbors; it is associated with the count-to-infinity problem.
  2. Link state routing: Each router requires knowledge of the complete network topology.
  3. Hierarchical routing: It is used for very large networks.
  4. Optimized link state routing: It is used for mobile hosts. 
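Shortest path routing, mentioned above, relies on Dijkstra's algorithm. Here is a minimal sketch; the four-router topology and link costs are invented for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over a weighted graph.

    graph: dict mapping node -> list of (neighbor, link_cost) pairs.
    Returns a dict of minimum cost from source to each reachable node.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already improved
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical routers A..D with symmetric link costs.
net = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 6)],
    "C": [("A", 4), ("B", 2), ("D", 3)],
    "D": [("B", 6), ("C", 3)],
}
```

From router A, the cheapest route to D goes A-B-C-D (cost 6) rather than the direct-looking A-B-D (cost 7).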


Monday, August 5, 2013

What is optimality principle?

A network consists of nodes which need to communicate with each other on various grounds. This communication is established via the communication channels that exist between them. The communication involves data transfers. In a network a node may or may not have a link with every other node. 
Applications that require communicating over a network include:
1. Telecommunication network applications such as POTS/ PSTN, local area networks (LANs), internet, mobile phone networks and so on.
2. Distributed system applications
3. Parallel system applications

- As mentioned above, each node need not be linked to every other node, since doing so would require a lot of wires and cables and make the whole network more complicated. 
- Therefore, we bring in the concept of intermediate nodes. 
- The data transmitted by the source node is forwarded to the destination by these intermediate nodes. 
Now the problem that arises is which path or route will be the best to use, i.e., the path with the least cost. 
- This is determined using the routing process. 
- The best path thus obtained is called the optimal route. 
- Today, a number of algorithms are available for determining the optimal path. 

These algorithms have been classified into two major types:
  1. Non – adaptive or static algorithms
  2. Adaptive or dynamic algorithms

Concept of Optimality Principle

- This is the principle followed while determining the optimal route between two routers. 
The general statement of the principle of optimality is stated below:
“An optimal policy has the property that whatever the initial state and initial decision are, the remaining decision must constitute an optimal policy with regard to the state resulting from the first decision.”

- This means that if an optimal policy passes through state P and later reaches state Q, then the portion of the original policy from P to Q must itself be optimal. 
- In other words, the optimality of any part of an optimal policy is preserved. 
- The initial state and the final state are the most important parts of the optimization. 
- Consider an example: suppose we have a problem with 3 inputs and 26 states. 
- Here, each state is associated with an optimum cost and the total cost is associated with the optimum policy.
- If the brute force method is used for 3 inputs and 100 stages, the total number of computations is 3^100.
- That means a supercomputer would be required to solve the problem this way.
- Therefore, the approach used for solving this problem is dynamic programming. 
- Here, for each state the best step is computed and stored during the programming. 
- This reduces the number of possibilities and hence the amount of computation.
- The problem becomes complex if the initial and final states are undefined. 
- It is necessary for the problem to follow the principle of optimality in order to use dynamic programming. 
- This implies that whatever the state may be, the decisions that follow must be optimal with regard to the state obtained from the previous decision. 
- This property is found in combinatorial problems, but since they use a lot of time and memory, this method is inefficient for them. 
- These problems can be solved efficiently if some sort of best-first search and pruning technique is applied.
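The reduction from brute force to stored sub-results can be sketched as follows. The three-action staged problem and its cost matrix are invented for illustration; the point is that dynamic programming, resting on the optimality principle, solves each (stage, state) subproblem once instead of enumerating 3^N action sequences:

```python
import itertools
from functools import lru_cache

# Hypothetical per-step costs: COST[state][action].
COST = [[1, 5, 2],
        [4, 1, 3],
        [2, 6, 1]]

def dp_cost(n_stages, start=0):
    @lru_cache(maxsize=None)
    def best(stage, state):
        if stage == n_stages:
            return 0
        # Optimality principle: the tail of an optimal policy is itself
        # optimal, so each (stage, state) pair is solved exactly once.
        return min(COST[state][a] + best(stage + 1, a) for a in range(3))
    return best(0, start)

def brute_cost(n_stages, start=0):
    # Enumerates all 3**n_stages action sequences.
    best = float("inf")
    for seq in itertools.product(range(3), repeat=n_stages):
        total, state = 0, start
        for a in seq:
            total += COST[state][a]
            state = a
        best = min(best, total)
    return best
```

Here `dp_cost(100)` finishes instantly with about 3 x 100 stored subproblems, while `brute_cost(100)` would need 3^100 evaluations.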

- With regard to routing in networks, it follows from the optimality principle that if router B lies on the optimal path between routers A and C, then the path from B to C also lies along that same optimal path and is itself optimal. 
- The set of all optimal routes to a destination forms a sink tree, and producing this sink tree is the ultimate goal of all routing algorithms.


Thursday, July 18, 2013

What is a routing algorithm in network layer?

About Routing
- The process of selecting the paths in a network along which data and network traffic are sent is termed routing. 
- Routing is carried out in a number of networks, such as transportation networks, telephone networks (circuit switching), and electronic data networks (for example, the internet). 
- The main purpose of routing is to direct packet forwarding from the source to its destination via intermediate nodes. 
- These nodes are hardware devices, namely gateways, bridges, switches, firewalls, routers, and so on. 
- A general purpose system that does not have any of these specialized routing components can also participate in routing, but only to a limited extent.

But how do we know where the packets have to be routed? 
- This information about the source and destination addresses is found in a table called the routing table, which is stored in the memory of the routers. 
- These tables store records of the routes to a number of destinations over the network. 
- Therefore, construction of the routing tables is also an important part of an efficient routing process. 
- Routing algorithms are used to construct this table and to select the optimal path or route to a particular destination. 

- A majority of the routing algorithms are based on single-path routing techniques, while a few others use multi-path routing techniques. 
- The latter allow the use of alternative paths if one is not available. 
- In some cases the algorithm may discover equal or overlapping routes. 
- In such cases the following 3 criteria are considered for deciding which route is to be used:
  1. Administrative distance: This criterion applies when different routing protocols are being used. It prefers a lower distance.
  2. Metric: This criterion applies when only one routing protocol is used throughout the network. It prefers a lower-cost route.
  3. Prefix length: This criterion does not depend on whether one protocol or many different protocols are involved. It prefers longer subnet masks.
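The prefix-length rule above can be sketched as a small longest-prefix-match lookup. This is an illustrative sketch only; the table entries and next-hop names are invented:

```python
import ipaddress

def longest_prefix_match(table, addr):
    """Pick the routing-table entry with the longest matching prefix.

    table: dict mapping a CIDR prefix string -> next hop.
    Returns the next hop, or None if nothing matches.
    """
    ip = ipaddress.ip_address(addr)
    best = None
    for prefix, next_hop in table.items():
        net = ipaddress.ip_network(prefix)
        # Among all matching prefixes, keep the most specific one.
        if ip in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, next_hop)
    return best[1] if best else None

# Hypothetical routing table.
routes = {
    "10.0.0.0/8": "hop-A",
    "10.1.0.0/16": "hop-B",
    "0.0.0.0/0": "default",
}
```

An address such as 10.1.2.3 matches all three entries, but the /16 wins because its subnet mask is the longest.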
Types of Routing Algorithms

Distance Vector Algorithms: 
- These algorithms are based on the Bellman-Ford algorithm. 
- In this approach, a cost number is assigned to each of the links between the nodes of a network.
- Information is sent from point A to point B through the route that results in the lowest total cost.
- The total cost is the sum of the costs of all the individual links in the route. 
The manner of operation of this algorithm is quite simple.
- Each node checks which destinations it can reach through its immediate neighbors with minimum cost and proceeds accordingly.
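A minimal synchronous sketch of the distance-vector relaxation follows. The node names and link costs are invented, and real protocols exchange vectors asynchronously between neighbors; this only shows the Bellman-Ford core:

```python
def distance_vector(links, nodes, rounds=None):
    """Synchronous distance-vector (Bellman-Ford) relaxation.

    links: dict {(u, v): cost} of directed links.
    Returns dist[u][v] = least-cost estimate from u to v.
    """
    inf = float("inf")
    dist = {u: {v: (0 if u == v else inf) for v in nodes} for u in nodes}
    for (u, v), c in links.items():
        dist[u][v] = min(dist[u][v], c)
    # Each round, every node merges the vectors advertised by its neighbors.
    for _ in range(rounds or len(nodes) - 1):
        for (u, v), c in links.items():
            for dest in nodes:
                if c + dist[v][dest] < dist[u][dest]:
                    dist[u][dest] = c + dist[v][dest]
    return dist

# Hypothetical three-node network: the A-C direct link is costly.
nodes = ["A", "B", "C"]
links = {("A", "B"): 1, ("B", "A"): 1,
         ("B", "C"): 2, ("C", "B"): 2,
         ("A", "C"): 5, ("C", "A"): 5}
dv = distance_vector(links, nodes)
```

After the exchange, A learns that reaching C via B (cost 3) beats its direct link (cost 5).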

Link-state Algorithms: 
- This algorithm works on a graphical map of the network, which is supplied as input to it. 
- To produce this map, each node assembles information about which nodes it can connect to in the network. 
- The router can then itself determine which path has the lowest cost and proceed accordingly. 
- The path is selected using standard path selection algorithms such as Dijkstra's algorithm. 
- This algorithm results in a tree graph whose root is the current node. 
- This tree is then used for the construction of the routing tables.

Optimized link state Routing Algorithm: 
- This is a link state algorithm that has been optimized for use in mobile ad-hoc networks. 
- The algorithm is commonly abbreviated OLSR (optimized link state routing). 
This algorithm is proactive and makes use of topology control messages for discovering and disseminating link state information through the mobile ad-hoc network. 


Sunday, July 14, 2013

What is Polling?

- Polling is often referred to as polled operation.
- Polling is the active sampling of the status of external devices by a client program, as a synchronous activity. 
- The most common use of polling is in input and output operations. 
- Polling is also called software-driven I/O or simply polled I/O. 
- Polling is sometimes carried out synonymously with busy waiting. 
- In that case it is referred to as busy-wait polling. 
- Here, whenever an input/output operation is required, the system simply checks the status of the device needed for the operation until the device is idle. 
- When it becomes idle, it is accessed by the I/O operation. 
- Such polling may also refer to a state in which the status of the device is checked again and again so it can be accessed when idle. 
- If the device is occupied, the system returns to some other pending task. 
- In this case less CPU time is wasted compared to busy waiting. 
- However, it is still not a better alternative than interrupt-driven I/O. 
- In very simple single-purpose systems, busy-wait polling is perfectly fine if the system cannot take any action until the I/O device has been accessed. 
- Traditionally, though, polling was seen as a consequence of simple hardware and operating systems that do not support multitasking. 
- Polling usually works intimately with low-level hardware. 
- For example, a parallel printer port can be polled to check whether it is ready to accept another character. 
- This involves examining just one bit. 
- The bit represents the high or low voltage state of a single wire in the printer cable at the time of reading. 
- The I/O instruction that reads this byte also transfers the voltage states directly into eight flip-flop circuits. 
- These 8 flip-flops together constitute one byte of a CPU register. 

Polling also has a number of disadvantages. 
- One is that there is limited time for servicing the I/O devices. 
- Polling has to be done within this time period only. 
- In some cases there are so many devices to be checked that the polling time exceeds the given limit. 
- The host keeps reading the busy bit until the device becomes idle or clear. 
- When the device is idle, the host writes a command into the command register and the data into the data-out register. 
- The command-ready bit is set to 1. 
- The controller sets the busy bit once it notices that the command-ready bit has been set.  
- After reading the command register, the controller carries out the required I/O operation on the device. 
- If the read bit has been set to one, the controller instead loads the device's data into the data-in register. 
- This data is then read by the host. 
- Once the whole action has been completed, the command-ready bit is cleared by the controller. 
- The error bit is also cleared to show that the operation has completed successfully. 
- At the end the busy bit is cleared as well.
- Polling can be seen in terms of a master-slave scenario, where the master inquires about the working status of the slave devices, i.e., whether they are clear or engaged. 
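The busy-wait idea can be sketched in a toy simulation. The `Device` class, the data value, and the timing are invented; a real driver would read hardware status registers, not Python attributes:

```python
import threading
import time

class Device:
    """Toy device with a busy flag and a data register (illustrative only)."""
    def __init__(self):
        self.busy = True
        self.data = None

def device_worker(dev):
    time.sleep(0.05)   # device is occupied for a while
    dev.data = 0x41    # result becomes available
    dev.busy = False   # busy bit cleared

dev = Device()
threading.Thread(target=device_worker, args=(dev,)).start()

# Busy-wait polling: repeatedly sample the status bit until it clears.
polls = 0
while dev.busy:
    polls += 1         # CPU time burned here is the cost of polling
result = dev.data
```

The loop spins thousands of times for a 50 ms wait, which is exactly the waste that interrupt-driven I/O avoids.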


Thursday, June 27, 2013

What is the difference between a passive star and an active repeater in fiber optic network?

A fiber optic network has two important components, namely the passive star coupler and the active repeater. 

Passive Star in Fiber Optic Network
- Passive star couplers are single-mode fiber optic couplers with reflective properties.  
- These couplers are used for optical local area networking at very high speeds. 
- They are made from very simple components such as mirrors and 3 dB couplers. 
- Besides this, these star couplers save a lot of optical fiber when compared to their transmissive counterparts. 
- They are free of multi-path effects, which avoids interference. 
- A fiber optic network may consist of any number of passive star couplers, each of them capable of connecting a number of users. 
- The input and output of every passive star coupler are connected to the output and input of an active coupler. 
- The round-trip transmission time is stored by the active star coupler. 
- When it receives a signal from a passive star coupler, it stops the output to that coupler for the duration of the signal.
- It also inhibits the incoming data from all the other passive star couplers for the round-trip transmission delay plus the signal duration. 
- The purpose of a star coupler is to take one input signal and split it into a number of output signals. 
- In the telecommunications industry and in fiber optic communication, this coupler is used in network applications as a passive optical device. 
- If an input signal is introduced at one of the input ports, it is distributed to all of the output ports of the coupler. 
- By construction, the number of ports of a passive star coupler is a power of 2. 
- For example, a two-port coupler (a directional coupler or splitter) has 2 input ports and 2 output ports.
- A four-port coupler has 4 input ports and 4 output ports, and so on. 
- Digital Equipment Corporation also sold a device by the name of star coupler, which was used for interconnecting links and computers through coaxial cable instead of optical fiber. 

Active Repeater in Fiber Optic Network 
- An active repeater is an important telecommunications device that re-transmits the signal it receives at a higher power level, often to the other side of an obstacle, so that long distances can be covered. 
- Historically, a repeater was an electromechanical device that regenerated telegraph signals. 
- It may be defined as an analog device that amplifies the input signal, or one that reshapes and re-times it for re-transmission. 
- A regenerator is a repeater that can perform the re-timing operation. 
- Repeaters just amplify the physical signal; they do not interpret the data the signal carries. 
- Repeaters operate at layer 1, the physical layer. 
- Repeaters are employed for boosting signals in optical fiber lines as well as in twisted pair and coaxial cables. 
- When a signal travels through a channel, it gets attenuated with distance and time because of energy loss (dielectric losses, conductor resistance, etc.). 
- When light travels in an optical fiber, it is scattered and absorbed, and hence attenuated. 
- Therefore, in long fiber lines, repeaters are installed at proper intervals to regenerate and strengthen the signal. 
Repeater in optical communication performs the following functions:
Ø  Takes the input optical signal
Ø  Converts it into an electrical signal
Ø  Regenerates it
Ø  Converts it back into an optical signal
Ø  Re-transmits it

- These repeaters are usually employed in submarine and transcontinental communication cables, where the loss would otherwise be unacceptable.  


Sunday, June 23, 2013

Explain the various File Operations

A number of operations can be carried out on a file; however, there are 6 basic file operations. A file is an abstract data type (ADT), and certain operations need to be considered to define it. The operating system provides system calls for these operations. 

Following are the six basic operations:

1. Creation of a file: 
- This operation involves two steps. 
- First, sufficient space has to be found for storing the file in the file system. 
- Second, an entry for this new file has to be made in the directory.

2. Writing to a file: 
- For writing data to a file, a system call has to be made with the file's name and the data to be written as its arguments. 
- A write pointer is kept by the system at the location in the file where the next write operation is to be carried out. 
- This pointer is updated whenever a write operation occurs.

3. Reading from a file: 
- Just like the write operation, in order to read information from a file, a system call has to be issued with the name of the file and the location of the content to be read as its arguments. 
- Here, instead of a write pointer, there is a read pointer that indicates the location where the next read operation is to take place. 
- Because a process is either writing to or reading from a file at any given moment, the current location is kept as a per-process current-file-position pointer. 
- The same pointer can be used by both the read and write operations, which reduces the complexity of the system and saves space.

4. Re-positioning within a file: 
- The system searches the directory for the appropriate entry. 
- When it is found, the current-file-position pointer is set to the given position. 
- This file operation does not require any actual input or output. 
- Another name for this operation is file seek.

5. Deletion of a file: 
- For deletion of a file, the system searches the directory for the appropriate entry. 
- When it is found, the space held by the file is released, and the directory entry is erased so that the space can be reused by other files.  

6. Truncating a file: 
- Sometimes you may need to delete only the contents of a file while keeping its attributes. 
- Deleting the file and recreating it is not an efficient solution. 
- This file operation lets you erase the contents of the file but keep its attributes.
- The length attribute of the file is reset to zero after truncation. 
- The file space is released after truncating.
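The six operations above map directly onto ordinary system calls. Here is a sketch in Python; the file name and directory are invented for the example:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# 1. Create: allocate space and make a directory entry.
with open(path, "w") as f:
    # 2. Write: the write pointer advances with each call.
    f.write("hello ")
    f.write("world")

with open(path, "r+") as f:
    # 3. Read: the read pointer starts at offset 0.
    assert f.read(5) == "hello"
    # 4. Reposition (seek): move the current-file-position pointer.
    f.seek(6)
    assert f.read() == "world"
    # 6. Truncate: keep the file and its attributes, discard the contents.
    f.truncate(0)

assert os.path.getsize(path) == 0

# 5. Delete: release the space and remove the directory entry.
os.remove(path)
```

Note how seek performs no actual I/O: it only changes where the next read or write will happen.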


The six basic file operations mentioned above constitute the minimal set of file operations. They are primitives which, when combined, can perform other secondary file operations such as copying. A table known as the open file table is maintained by the operating system; it stores information about all the files that are currently open. When a file is closed, its entry is deleted from the open file table. A file has to be opened explicitly with the open() function before use. The name of the file is passed as an argument to this function; the system then looks in the directory for the file and makes an entry in the open file table. Each file also has access rights, and it is in these access modes that a process uses the file. A process can perform only those operations which are permitted by the access rights of the file. 


Saturday, June 15, 2013

What is Process State Diagram?

In systems that involve multiple processors or multitasking, a process goes through a number of states. In this article we shall discuss these states. 
The kernel of the operating system may not recognize all of these states distinctly, but they are still useful abstractions for understanding how processes are executed. 
These states can be looked up in a process state diagram for a systematic view. This diagram shows the transitions of a process between the various states with arrows. Processes can be stored both in secondary (virtual) memory and in main memory, depending on the situation.

Process States

- The primary process states occur in all kinds of systems. 
- Processes in these states are usually stored in main memory. 
Basically there are 5 major states of any process, as discussed below:

Ø  Created: 
- It is also known as the 'new' state. 
- A process occupies this state upon its creation. 
- The process lies in this state while waiting to be admitted to the ready state. 
- The admission scheduler decides whether to admit the process to the next state or to delay it on a short- or long-term basis. 
- On most desktop computers this admission is approved automatically. 
- In systems with real-time operating systems, however, this is not true. 
- There, admission might be delayed by a certain amount of time.
- If too many processes are admitted to the ready state in a real-time operating system, over-contention and over-saturation might occur, leaving the system unable to meet its deadlines.

Ø  Ready or Waiting: 
- A process takes up this state when it has been loaded into the physical memory of the system and is waiting to be executed by a processor, or more precisely, waiting to be context-switched in by the dispatcher. 
- At any instant there might be a number of processes waiting for execution. 
- The processes wait in a queue called the run queue, out of which only one process at a time is taken up by a processor. 
- Processes that are waiting for input from some event are not put into this ready queue.

Ø  Running: 
- When a process is selected by the CPU for execution, its state is changed to running. 
- One of the processors executes the instructions of the process one by one. 
Only one process can be run by a processor at a time.

Ø Blocked: 
- When a process is waiting on some event, such as the completion of an I/O operation, it is put into the blocked state. 
- A process may also leave the running state because it has used up the CPU time allocated to it, in which case it returns to the ready state.

Ø  Terminated: 
- Termination of a process may occur either when its execution is complete or when it has been explicitly killed.
- This state is called terminated or halted. 
- A process that remains in main memory after reaching the terminated state is called a zombie process. 

There are two additional states for supporting the virtual memory. In these states the process is usually stored in the secondary memory of the system:

Ø Swapped out or Waiting: 
- A process is said to be swapped out when it is removed from primary memory and placed in secondary memory. 
- This is done by the mid-term scheduler. 
- After this the state of the process changes to swapped out and waiting.  

Ø  Swapped out and Blocked:
- In some cases, processes that were in the blocked state are swapped out. 
- The same process may later be swapped back in when conditions allow.
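The states and transitions above can be encoded as a small table-driven state machine. This is only a sketch; the exact transition set is an assumption based on the diagram described in this article:

```python
from enum import Enum, auto

class State(Enum):
    CREATED = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    TERMINATED = auto()
    SWAPPED_WAITING = auto()
    SWAPPED_BLOCKED = auto()

# Legal transitions, as read off the state diagram described above.
TRANSITIONS = {
    State.CREATED: {State.READY},
    State.READY: {State.RUNNING, State.SWAPPED_WAITING},
    State.RUNNING: {State.READY, State.BLOCKED, State.TERMINATED},
    State.BLOCKED: {State.READY, State.SWAPPED_BLOCKED},
    State.SWAPPED_WAITING: {State.READY},
    State.SWAPPED_BLOCKED: {State.SWAPPED_WAITING, State.BLOCKED},
    State.TERMINATED: set(),
}

def step(current, nxt):
    """Validate a transition against the diagram; raise if illegal."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt
```

A kernel does something similar implicitly: the dispatcher only ever moves a process along the arrows of the diagram, never, say, from terminated back to ready.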



What is CPU Scheduling Criteria?

Scheduling is an essential concept in multitasking, multiprocessor, and distributed systems. There are several schedulers available for this purpose, but these schedulers also require criteria upon which they can decide how to schedule the processes. In this article we discuss these scheduling criteria. A number of scheduling algorithms are available today, all with different properties, which is why they may work with different scheduling criteria. The chosen algorithm may also favor one class of processes more than another.

What Criteria are used by Algorithms for Scheduling?


Below mentioned are some of the criteria used by these algorithms for scheduling:
1. CPU utilization:
- It is a property of a good system to keep the CPU as busy as possible.
- Utilization ranges from 0 percent to 100 percent.
- In lightly loaded systems, utilization is around 40 percent, while in heavily loaded systems it is around 90 percent.

2. Throughput:
- Work is being done when the CPU is busy executing processes.
- Throughput is one measure of CPU performance and can be defined as the number of processes completed per unit of time.
- For example, for short transactions throughput might be around 10 processes per second.
- For longer transactions it may be around one process per hour.

3. Turnaround time:
- This is an important criterion from the point of view of a process.
- It tells how much time the system has taken to execute a process.
- The turnaround time can be defined as the time elapsed from the submission of the process until its completion.

4. Waiting time:
- The total amount of time a process needs to complete is not affected by the CPU scheduling algorithm.
- Rather, the algorithm only affects the time the process spends in the waiting state.
- The time for which the process waits in the ready queue is called the waiting time.

5. Response time:
- Turnaround time is not a good criterion in all situations.
- Response time is more relevant in the case of interactive systems.
- It often happens that a process is able to produce some output fairly quickly, well before its expected completion time.
- Such a process can then continue with its next instructions while the user sees the first results.
- The time taken from the submission of a process until the production of its first response is the response time, and it is another criterion for CPU scheduling algorithms.
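Waiting time and turnaround time can be computed for a simple first-come-first-served schedule as a sketch (the job list below is invented for illustration):

```python
def fcfs_metrics(jobs):
    """Average waiting and turnaround times under first-come-first-served.

    jobs: list of (arrival_time, burst_time), sorted by arrival.
    Returns (avg_waiting, avg_turnaround).
    """
    clock = 0
    waiting, turnaround = [], []
    for arrival, burst in jobs:
        start = max(clock, arrival)
        waiting.append(start - arrival)      # time spent in the ready queue
        clock = start + burst
        turnaround.append(clock - arrival)   # submission -> completion
    return sum(waiting) / len(jobs), sum(turnaround) / len(jobs)

# Three hypothetical jobs: (arrival, CPU burst).
avg_wait, avg_turn = fcfs_metrics([(0, 5), (1, 3), (2, 1)])
```

Note that the short third job waits 6 time units behind the long first one; a scheduler optimizing waiting time would order the jobs differently, which is exactly the conflict between criteria discussed next.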

All these are the primary performance criteria, of which one or more can be selected by a typical CPU scheduler. These criteria may be ranked by the scheduler according to their importance. One common problem in the selection of performance criteria is the possibility of conflict between them.
For example, increasing the number of active processes increases CPU utilization but at the same time increases the response time. It is often desirable to reduce waiting time and turnaround time as well. In many cases the average measure is optimized, but there are also cases where it is more beneficial to optimize the maximum or minimum values.
A scheduling algorithm that maximizes throughput will not necessarily decrease turnaround time. Given a mix of short and long jobs, a scheduler that runs only the short jobs produces the best throughput, but at the same time the turnaround time for the long jobs becomes so high that it is not desirable.

