
Wednesday, September 18, 2013

What are the advantages and disadvantages of the datagram approach?

- Today’s packet-switching networks make use of a basic transfer unit commonly known as the datagram. 
- In such packet-switched networks there is no guarantee about the order in which packets arrive, their time of arrival, or delivery itself. 
- The first packet-switching network to use datagrams was CYCLADES. 
- Datagrams are known by different names at different layers of the OSI model. 
- For example, the unit is called a chip at layer 1, a frame or cell at layer 2, a packet at layer 3, and a segment at layer 4. 
- The major characteristic of a datagram is that it is independent, i.e., it carries all the information required for its delivery and does not rely on earlier exchanges.
- Unlike a telephone conversation, there is no fixed-duration connection between the two endpoints. 
- Virtual circuits are just the opposite of datagrams. 
- Thus, a datagram can be called a self-contained entity. 
- It contains information sufficient to route it from the source to the destination without depending on the exchanges made earlier. 
- Often, a comparison is drawn between the mail delivery service and the datagram service. 
- The user’s work is just to provide the address of the destination. 
- But the user is not guaranteed delivery of the datagram, and if the datagram is successfully delivered, no confirmation is sent back. 
- Datagrams are routed to their destination without the help of a predetermined path. 
- The order in which the data is sent or received is given no consideration. 
- It is because of this that datagrams belonging to a single group might travel over different routes before they reach their common destination. 
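To make the "self-contained" idea concrete, here is a minimal Python sketch. All names and forwarding tables below are invented for illustration; this is not a real protocol stack, just a picture of why carrying the full destination address lets every router forward a datagram independently.

```python
from dataclasses import dataclass

@dataclass
class Datagram:
    source: str        # full source address
    destination: str   # full destination address, carried in every packet
    payload: bytes

# Each router keeps only its own forwarding table; no connection state exists.
FORWARDING_TABLES = {
    "R1": {"hostB": "R2"},
    "R2": {"hostB": "R3"},
    "R3": {"hostB": "deliver"},
}

def forward(router: str, dgram: Datagram) -> str:
    """Choose the next hop using nothing but the address inside the datagram."""
    return FORWARDING_TABLES[router][dgram.destination]

d = Datagram("hostA", "hostB", b"hello")
print(forward("R1", d))   # -> R2
print(forward("R2", d))   # -> R3; tables may change between packets,
                          # so packets of one flow can take different routes
```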

Advantages of Datagram Approach
  1. Datagrams carry the full destination address rather than a short connection identifier set up in advance.
  2. There is no setup phase, so no resources have to be reserved before transmission begins.
  3. If a router goes down during a transmission, only the datagrams queued in that particular router are lost; all other datagrams are unaffected.
  4. If a fault or loss occurs on a communication line, datagram networks can compensate for it by routing around the failure.
  5. Datagrams play an important role in balancing the traffic in the subnet, because the route can be changed halfway through a sequence of packets.
Disadvantages of Datagram Approach

  1. Since every datagram carries the full destination address, datagrams generate more overhead and thus waste bandwidth. This makes the datagram approach comparatively costly.
  2. Every router must follow a more complicated lookup procedure to determine where to forward each packet.
  3. In a subnet using the datagram approach, it is very difficult to keep congestion problems at bay.
  4. Any-to-any communication is one of the key disadvantages of datagram subnets. If a system can communicate with any device, then any device can also communicate with that system, which can lead to various security issues.
  5. Datagram subnets are prone to losing or re-sequencing data packets in transit. This puts a great burden on the end systems, which must monitor, recover, and reorder the packets into their original sequence.
  6. Datagram subnets are less capable of congestion control and flow control, because traffic can arrive from any direction. In virtual-circuit subnets the packets flow only along established circuits, which makes control comparatively easy.
  7. The unpredictable nature of the traffic flow makes datagram networks difficult to design.


Wednesday, August 14, 2013

What is the idea behind link state routing?

There are two main classes of routing protocols: the link-state routing protocols, on which we shall focus in this article, and the distance-vector routing protocols.
In computer communications, link-state routing protocols are applied in packet-switching networks. 
Two main examples of link-state routing protocols are:
  1. IS-IS, i.e., Intermediate System to Intermediate System, and
  2. OSPF, i.e., Open Shortest Path First
Link-state routing is performed by every switching node in the network, i.e., every node prepared to forward packets. On the Internet these nodes are called routers.

Idea behind the Link State Routing

 
- Every node constructs a map of its connectivity with the network. 
- This map takes the form of a graph. 
- The graph shows all the connections that exist between the various nodes in the network. 
- Each node then independently calculates the best path from itself to every possible destination in the network. 
- This collection of best paths then forms the node's routing table.
- This is in total contrast with the second class of routing protocols.
- In the distance-vector routing protocols, a node shares its routing table with its neighbors, whereas in a link-state protocol only connectivity-related information is passed between the nodes. 
- The simplest configuration of a link-state protocol is one with no areas. 
- This implies that each node possesses a map of the whole network. 
- The first main stage involves providing the map of the network to each node. 

To do this, the following subsidiary steps are followed:
  1. Determination of the neighboring nodes: Each node determines which neighboring nodes it is connected to and whether the links to them are fully working. A reachability protocol is used to accomplish this task; it is run periodically and separately with each neighboring node.
  2. Distribution of the map information: Each node periodically, and whenever its connectivity changes, creates a short message called a link-state advertisement.
- The set of link-state advertisements obtained this way is used to create the map of the entire network. 
- The second stage involves producing the routing tables by inspecting the map. 
This again involves a number of steps:
  1. Calculation of the shortest paths: The shortest path from the node to every other node is determined by running a shortest-path algorithm over the entire map. The commonly used algorithm is Dijkstra's algorithm (see the sketch after this list).
  2. Filling the routing table: For each destination, the table is filled with the best shortest path obtained in the above step.
  3. Optimizations: This is the simple form of the algorithm; in practical applications it is used along with a number of optimizations. Whenever a change in network connectivity is detected, the shortest-path tree has to be recomputed and the routing table recreated. BBN Technologies discovered a method for recomputing only the affected part of the tree.
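To make the shortest-path step concrete, here is a minimal Python sketch of Dijkstra's algorithm run over a link-state map. The graph, node names, and link costs are invented for illustration; a real router would build this graph from the flooded link-state advertisements.

```python
import heapq

def dijkstra(graph, source):
    """Return (cost, first hop) for every destination reachable from source."""
    dist = {source: 0}
    first_hop = {}
    pq = [(0, source, None)]              # (cost so far, node, first hop used)
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue                       # stale queue entry
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                # the first hop of the path is what the routing table needs
                first_hop[neighbor] = hop if hop is not None else neighbor
                heapq.heappush(pq, (new_cost, neighbor, first_hop[neighbor]))
    return dist, first_hop

graph = {                                  # adjacency map built from the LSAs
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 6},
    "C": {"A": 4, "B": 2, "D": 3},
    "D": {"B": 6, "C": 3},
}
dist, table = dijkstra(graph, "A")
print(dist)    # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
print(table)   # routing table: destination -> first hop
```

Note that the routing table keeps only the first hop of each shortest path; forwarding hop by hop then reproduces the full path, because every node has computed its table from the same map.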

- If the nodes are not all working from exactly the same map, routing loops can form. 
- For ad hoc networks such as mobile networks, an optimized form of the protocol, the Optimized Link State Routing protocol, is used. 


Saturday, May 4, 2013

What is Context Switch?


- A context switch refers to the process of storing and restoring the context, or state, of a process. 
- This makes it possible to resume execution of the process from that same saved point in the future. 
- This is very important because it enables several processes to share one CPU; it is therefore one of the essential features of a multitasking operating system. 
- What constitutes the context is decided by the operating system and the processor. 
- One of the major characteristics of context switches is that they are computationally very intensive.
- Much of operating system design is concerned with optimizing the use of these switches. 
- A finite amount of time is required to switch from one process to another. 
- This time is spent on process administration: saving and loading memory maps, registers, and so on, as well as updating various lists and tables. 
- A context switch may mean either of the following:
- A register context switch
- A task context switch
- A thread context switch
- A process context switch

Potential Triggers for a Context Switch

There are three potential triggers for a context switch. A switch can occur under any of the following conditions:

1. Multi-tasking: 
- It is common that one process has to be switched out of the processor so that another process can execute. 
- This is done through some scheduling scheme. 
- A process can trigger this context switch by making itself unrunnable, for example by waiting for an I/O or synchronization operation to finish. 
- On a multitasking system that uses pre-emptive scheduling, even processes that are still runnable may be switched out by the scheduler. 
- A timer interrupt is employed by some pre-emptive schedulers to prevent a process from starving the others of CPU time.
- This interrupt is triggered when the process exceeds its time slice. 
- Furthermore, the interrupt ensures that the scheduler gains control to perform the switch.
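The following is a minimal user-space caricature of this idea, assuming a round-robin scheme with a fixed time slice. Real context switches are performed by the kernel on hardware registers; here a "context" is just a dictionary, and all names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    pc: int = 0                            # saved program counter
    registers: dict = field(default_factory=dict)

@dataclass
class Process:
    name: str
    remaining: int                         # instructions left to execute
    ctx: Context = field(default_factory=Context)

def run_round_robin(processes, time_slice=3):
    """Switch the CPU between processes whenever the time slice expires."""
    ready = list(processes)
    while ready:
        proc = ready.pop(0)                # "restore" the saved context
        steps = min(time_slice, proc.remaining)
        proc.ctx.pc += steps               # pretend we executed `steps` instructions
        proc.remaining -= steps
        print(f"ran {proc.name} for {steps} steps, pc now {proc.ctx.pc}")
        if proc.remaining > 0:
            ready.append(proc)             # "save" context and requeue: a switch

run_round_robin([Process("A", 5), Process("B", 7)])
```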

2. Interrupt handling: 
- Modern architectures are interrupt-driven. 
- This implies that the CPU can issue a request (for example a disk read) and continue with other execution instead of waiting for the read/write operation to finish. 
- When the operation is over, an interrupt fires and the result is presented to the CPU. 
- The interrupt is handled by a program called an interrupt handler, which, for example, delivers the data that has arrived from the disk. 
- A part of the context is switched automatically by the hardware upon the occurrence of an interrupt. 
- This part of the context is enough to allow the handler to return to the code that was interrupted.
- Additional context may be saved by the handler, depending on the details of the hardware and software designs. 
- Usually only a small, required part of the context is changed, so as to keep the handling time as short as possible. 
- The kernel does not spawn or schedule a separate process to handle interrupts.
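As a loose user-space analogy only (not real hardware interrupt handling), a Unix signal handler behaves much like this: the main flow keeps running, the "interrupt" fires asynchronously, and the runtime hands the handler just enough context to return to the interrupted code. This sketch assumes a Unix-like OS, since it uses SIGALRM.

```python
import signal
import time

def handler(signum, frame):
    # `frame` is the interrupted execution context the runtime passes in,
    # loosely analogous to the partial context saved by the hardware
    print(f"interrupt fired while executing line {frame.f_lineno}")

signal.signal(signal.SIGALRM, handler)     # install the "interrupt handler"
signal.alarm(1)                            # request an "interrupt" in 1 second

for i in range(3):                         # the main flow keeps doing other work
    time.sleep(0.6)
    print("main flow, step", i)
```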

3. User and kernel mode switching: 
- Making a transition between user mode and kernel mode in an operating system does not in itself require a context switch.
- A mode transition alone is not a context switch. 
- However, depending on the operating system, a context switch may also take place at this point. 


Friday, May 3, 2013

What is a Dispatcher?


A number of types of schedulers are available that suit the different needs of different operating systems. Presently, there are three categories of the schedulers:
  1. Long-term schedulers
  2. Medium-term schedulers
  3. Short-term schedulers
Apart from the schedulers, one more component is involved in the scheduling process: the dispatcher. 
- It is the dispatcher that gives a process control of the CPU. 
- Which process receives this control is decided by the short-term scheduler. 
- This whole process involves the following three steps:
  1. Switching the context
  2. Switching to user mode
  3. Jumping to the proper location in the user program from where it has to be restarted.
- The dispatcher reads the saved program counter and accordingly fetches instructions and loads data into the registers. 
- Unlike the other system components, the dispatcher needs to be very fast, since it is invoked during every switch that occurs. 
- Whenever a context switch is invoked, the processor sits effectively idle for a very small period of time. 
- Hence, context switches that are not necessary should be avoided. 
- The dispatcher takes some time to stop one process and start running another; this time is called the dispatch latency (the sketch below estimates it).
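Dispatch latency can be estimated roughly from user space. The sketch below, which assumes a Unix-like OS, forces the kernel to dispatch back and forth between two processes by ping-ponging one byte over a pair of pipes; the measured time includes pipe overhead, so it is only an upper bound.

```python
import os
import time

ROUNDS = 10_000
r1, w1 = os.pipe()
r2, w2 = os.pipe()

if os.fork() == 0:                  # child: echo every byte straight back
    for _ in range(ROUNDS):
        os.read(r1, 1)
        os.write(w2, b"x")
    os._exit(0)

start = time.perf_counter()
for _ in range(ROUNDS):
    os.write(w1, b"x")              # wake the child, then block on its reply:
    os.read(r2, 1)                  # each round trip forces two dispatches
elapsed = time.perf_counter() - start
print(f"~{elapsed / ROUNDS / 2 * 1e6:.1f} us per switch (incl. pipe overhead)")
```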

- Scheduling and dispatching are complex, interrelated processes. 
- Both are essential to the operation of the operating system. 
- Today, architectural extensions are available for modern processors that provide multiple banks of registers.
- These banks can be swapped in hardware, so a certain number of tasks can each retain their full set of registers. 
- Whenever an interrupt triggers the dispatcher, the full register set of the process that was executing at the time of the interrupt is handed to it. 
- The program counter, however, is not included. 
- Therefore, the dispatcher must be written carefully to store the current state of the registers as soon as it is triggered. 
- In other words, the dispatcher itself has no immediate context of its own. 
- This saves it from the same problem. 

Process of Dispatcher

Below we describe in simple words what the process actually is.
  1. The program presently holding the context is executed by the processor. The things used by this program include the stack base, flags, program counter, registers, and so on, with the possible exception of a reserved register native to the operating system. The executing program has no knowledge of the dispatcher.
  2. A timer interrupt fires for the dispatcher. The program counter jumps to the address listed in the interrupt vector table, and execution of the dispatch subroutine begins. The dispatcher then deals with the stack, registers, etc. of the program that was interrupted.
  3. The dispatcher, like other programs, consists of sets of instructions that operate on the registers of the current program. The first few of these instructions are responsible for storing the state of that program.
  4. The dispatcher next determines which program should be given the CPU next. It then replaces the saved state of the previous process on the CPU with the details of the next process to be executed.
  5. The dispatcher jumps to the address in the program counter and establishes a full context on the processor.
- Strictly speaking, the dispatcher does not really require registers of its own, since its only work is to write the current state of the CPU into a predetermined memory location. 
- It then loads another process into the CPU from another predetermined location. 
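The steps above can be caricatured in a few lines of Python. Everything here, the CPU dictionary, the save area, the ready queue, is invented purely to illustrate the save, pick, load, and jump cycle.

```python
cpu = {"pc": 120, "registers": [1, 2, 3, 4]}   # the single shared "CPU"
saved = {}                                      # predetermined save area
ready_queue = ["compiler", "shell"]
running = "editor"

def dispatch():
    global running
    saved[running] = dict(cpu)                  # store the current CPU state
    ready_queue.append(running)
    running = ready_queue.pop(0)                # the short-term scheduler's pick
    # load the next process's saved context, or a fresh one on its first run
    cpu.update(saved.pop(running, {"pc": 0, "registers": [0, 0, 0, 0]}))
    # execution now "jumps" to cpu["pc"] of the restored process

dispatch()
print("now running:", running, "resuming at pc", cpu["pc"])
```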


Thursday, March 28, 2013

What is the basic principle behind Dynamic synchronous transfer mode (DTM)?


- Dynamic synchronous transfer mode, or DTM, is one of the most interesting of all the networking technologies. 
- The basic objective behind this technology is to achieve high-speed networking along with top-quality transmission.
- It also has the ability to quickly adapt its bandwidth to varying traffic conditions. 
- DTM was designed to be used in integrated services networks, covering both one-to-one communication and distribution.
- Furthermore, it can be used for application-to-application communication. 
- Nowadays, it is also used as a carrier for higher-layer protocols such as IP. 
- DTM is a combination of two basic technologies, namely packet switching and circuit switching. 
- It is because of this that DTM has many advantages to offer. 
- It also comes with a number of service access solutions for the following fields:
- City networks
- Enterprises
- Residential as well as other small offices
- Content providers
- Video production networks
- Mobile network operators

Principles of Dynamic synchronous transfer mode (DTM)

 
- This mode has been designed to work upon a unidirectional medium. 
- The medium supports multiple access, i.e., all the connected nodes can share it. 
- It can be built upon various topologies such as:
  1. Ring
  2. Double ring
  3. Point-to-point
  4. Dual bus, and so on.
- DTM is based upon TDM, i.e., time-division multiplexing. 
- Here, a fiber link's transmission capacity is broken down into smaller units of time. 
- The total link capacity is divided into frames of a fixed duration of 125 microseconds. 
- The frames are then further divided into 64-bit time slots. 
- The number of time slots in one frame is determined by the link's bit rate (see the sketch at the end of this post). 
- The time slots are divided into separate control slots and data slots. 
- If in some cases more control slots are required, data slots can be turned into control slots, or vice versa.
- The nodes attached to the link have the right to write into both kinds of slots. 
- As a consequence, a given time slot occupies the same position within every frame. 
- Each node owns at least one control slot, which it can use for transmitting control messages to the other nodes. 
- These messages can be sent at the user's request, in response to messages from other nodes, or for network management purposes.
- The control slots constitute only a small fraction of the total capacity, while the major part is taken by the data slots that carry the payload. 
- The signaling overhead in DTM varies with the number of control slots, though it is usually very low.
- Whenever a communication channel is established, the node allocates a portion of the available data slots to that channel. 
- The demand for network transfer capacity keeps increasing because of the globalization of network traffic and integrated audio, video, and data transmission. 
- The transmission capacity of optical fiber is increasing by far greater margins than processing power. 
- DTM holds the promise of providing full control over network resources.
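As a back-of-the-envelope sketch of the framing arithmetic described above: with 125-microsecond frames and 64-bit slots, the slot count follows directly from the link's bit rate. The example bit rates and the control/data split below are assumptions made purely for illustration.

```python
FRAME_SECONDS = 125e-6          # fixed frame duration
SLOT_BITS = 64                  # fixed slot size
# one slot per frame is 64 bits / 125 us = 512 kbit/s of channel capacity

def slots_per_frame(bit_rate: float) -> int:
    return int(bit_rate * FRAME_SECONDS // SLOT_BITS)

for rate in (155.52e6, 622.08e6, 2.5e9):       # example optical link rates
    n = slots_per_frame(rate)
    control = max(1, n // 100)                 # assume ~1% control slots
    print(f"{rate / 1e6:8.2f} Mbit/s -> {n:5d} slots per frame "
          f"({control} control, {n - control} data)")
```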


Monday, March 25, 2013

What is Dynamic synchronous transfer mode (DTM)?


Dynamic synchronous transfer mode, or DTM, is a technology developed for optical networking. ETSI (the European Telecommunications Standards Institute) standardized this technology in 2001, beginning with specification ETSI ES 201 803-1. 
It is a circuit-switched network technology that doubles as a time-division multiplexing technology; it is built upon a combination of switching and transport.
This technology guarantees quality of service (QoS) for services involving video streaming. 

However, it can be used for packet-based services as well. It is marketed for the following:
  1. Professional media networks
  2. Mobile TV networks
  3. DTT or digital terrestrial television networks
  4. Content delivery networks
  5. Consumer oriented networks (for example, triple play)

What is Switching?

- DTM specifies the switching of channels. 
- This is what makes it different from other transmission techniques such as SONET (synchronous optical networking) and SDH (synchronous digital hierarchy). 
- A DTM channel is provisioned end to end over a network of general topology through the use of control signaling.
- DTM therefore represents a circuit-switched system. 
- The switches are time-space switches that guarantee the QoS property. 
- Resources are physically allocated for each channel in the switch. 
- This is quite contrary to switches based upon packets or cells. 
- In those kinds of switches there is always competition for resources between the packets or cells. 
- Such competition leads to packets and cells being delayed or discarded. 
- Other methods offer a shared resource allocation mechanism, which limits how heavily packet and cell switches can utilize the network while keeping QoS at a given level. 
- DTM does not follow this shared allocation mechanism; rather, it implies that a network can theoretically be loaded to its full limit and still guarantee QoS. 
- Thus, real utilization here becomes more a question of adapting the network topology and its link capacities to the actual traffic matrix.

- Packet- and cell-based switching technologies are better suited to statistical multiplexing.
- This means that whenever packet streams in a router converge on an outgoing link common to all of them, packets are buffered until capacity is free on that particular link (see the sketch below).
- In this way, the outgoing link can be utilized to the maximum degree possible without causing long delays. 
- This suits best-effort traffic well. 
- But streaming media has certain QoS requirements that cannot be ignored. 
- Streaming traffic is by nature not statistical, and it is therefore better served by end-to-end resource allocation.
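Here is a toy Python sketch of statistical multiplexing; the arrival pattern and the link rate are invented for illustration. Bursty input streams share one outgoing link, and whatever the link cannot carry in a tick waits in the buffer; that queueing delay is exactly what end-to-end slot allocation avoids.

```python
from collections import deque
import random

random.seed(1)
buffer = deque()                    # shared buffer in front of the outgoing link
LINK_RATE = 2                       # the link drains 2 packets per tick

for tick in range(5):
    arrivals = random.randint(0, 4)            # bursty input from many streams
    for i in range(arrivals):
        buffer.append((tick, i))
    sent = 0
    while buffer and sent < LINK_RATE:         # transmit while capacity remains
        buffer.popleft()
        sent += 1
    print(f"tick {tick}: {arrivals} arrived, {sent} sent, {len(buffer)} queued")
```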

- Audio and video services fall into this category.
- It also covers IP traffic carried over a guaranteed-QoS transport, provided the majority of the content is audio and video. 
- Other technologies, such as IP and Ethernet, have also been adapted for the same purpose. 
- Multiprotocol Label Switching (MPLS) can be applied to the carrier network to improve the reliability and determinism required by most streaming media. 
- This technology is applied together with techniques such as forward error correction.
- Ethernet has been made supportive of audio and video transmission through improvements such as Provider Backbone Bridge Traffic Engineering. 
- Dynamic synchronous transfer mode was developed at the Royal Institute of Technology (KTH). 

