
Wednesday, September 11, 2013

What are multi-protocol routers?

- Some routers have the capability to route a number of protocols at the same time. 
- These routers are popularly known as multi-protocol routers. 
- There are situations in networking where combinations of protocols such as AppleTalk, IP, and IPX are used. 
- In such situations a typical single-protocol router cannot help; this is where multi-protocol routers are used. 
- Using multi-protocol routers, information can be shared between networks that run different protocols. 
- A multi-protocol router maintains an individual routing table for each of the protocols (see the sketch after this list).
- Multi-protocol routers have to be used carefully, since they increase the number of routing tables present on the network. 
- Each protocol is advertised individually by the router. 
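As a rough illustration, here is a minimal sketch of the idea of keeping one routing table per protocol. The class, protocol names, and addresses are invented for illustration and do not correspond to any real vendor implementation.

    # Minimal sketch of a multi-protocol router keeping a separate routing
    # table per protocol (names and table layout are illustrative assumptions).
    class MultiProtocolRouter:
        def __init__(self, protocols):
            # one independent routing table per routed protocol
            self.tables = {proto: {} for proto in protocols}

        def add_route(self, protocol, destination, next_hop):
            self.tables[protocol][destination] = next_hop

        def lookup(self, protocol, destination):
            # each protocol is routed only against its own table
            return self.tables[protocol].get(destination)

    router = MultiProtocolRouter(["IP", "IPX", "AppleTalk"])
    router.add_route("IP", "10.0.2.0/24", "10.0.1.1")
    router.add_route("IPX", "00A1B2C3", "00A1B2C3.0000.0c12.3456")
    print(router.lookup("IP", "10.0.2.0/24"))    # -> 10.0.1.1

Note how a lookup for one protocol never touches the tables of the others; this is also why each additional protocol costs extra table memory, as discussed further below.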

A multi-protocol router typically includes support for the following:
Ø  Routing information protocol (RIP)
Ø  Boot protocol relay agent (BOOTP)
Ø  RIP for IPX
- Multi-protocol routers use the Routing Information Protocol for dynamic exchange of routing information. 
- Routers running RIP can dynamically exchange routes with other routers that use the same protocol. 
- The BOOTP relay agent is included so that DHCP requests can be forwarded to their respective servers residing on other subnets. 
- Because of this, a single DHCP server can serve a number of IP subnets. 
- Multi-protocol routers generally do not need to be configured manually.
- The networking world these days relies almost entirely on the Internet Protocol, but there are situations where certain tasks can be performed more efficiently by other protocols. 
- Most network protocols share many similarities rather than being fundamentally different. 
- Therefore, if a router can route one protocol efficiently, it can usually route another protocol efficiently as well. 
- If non-IP protocols are routed in a network, the same staff that takes care of IP monitoring can administer the non-IP routing as well. 
This reduces the need for extra equipment and effort. 
- There are a number of non-IP protocols that can make a LAN work more effectively. 
- Using non-IP protocols, a network can be made more flexible and better able to meet the demands of its users. 
- All these points speak in favor of multi-protocol routing in an abstract way. 
- But the non-IP protocols to be routed must be selected with care. 

Below are some reasons why routing non-IP protocols may be best avoided:

  1. It requires additional knowledge, because no one can master everything. Each protocol needs an expert who can diagnose and fix failures.
  2. It puts extra load on the routers. For every protocol, the router has to maintain a separate routing table and often run a separate dynamic routing protocol. All of this requires more memory and more processing power.
  3. It increases complexity. Even though a multi-protocol router may seem simple, it is quite complicated in terms of both hardware and software. A problem in the implementation of one protocol can negatively affect the stability of all the protocols.
  4. It makes the design difficult. Each protocol has its own rules for routing, address assignment and so on, and these rules may conflict, which makes the network harder to design.
  5. It decreases stability. Some protocols do not scale as well as others, and some are not suited to a WAN environment. 


Wednesday, August 14, 2013

What is the idea behind link state routing?

There are two main classes of routing protocols: link state routing protocols, on which we shall focus in this article, and distance vector routing protocols.
In computer communications, link state routing protocols are used in packet-switching networks. 
Two main examples of link state routing protocols are:
  1. IS-IS, i.e., Intermediate System to Intermediate System, and
  2. OSPF, i.e., Open Shortest Path First
Link-state routing is performed by every switching node in the network, i.e., every node that forwards packets. In the Internet these nodes are called routers.

Idea behind the Link State Routing

 
- Every node constructs a map of its connectivity to the network. 
- This map takes the form of a graph. 
- The graph shows all the connections that exist between the various nodes in the network. 
- Each node then independently calculates the best path from itself to every destination in the network. 
- The routing table of the node is then formed by this collection of best paths.
This is in total contrast with the second class of routing protocols.
- In distance vector routing protocols, a node shares its entire routing table with its neighbors, whereas in link state routing protocols only connectivity-related information is passed between nodes. 
- The simplest configuration of a link state routing protocol is the one that has no areas. 
- This implies that each node possesses a map of the whole network. 
- The first main stage involves providing the map of the network to each node. 

This is done in the following subsidiary steps:
  1. Determination of the neighboring nodes: Each node determines which neighboring nodes it is connected to and whether the links to them are fully working. A reachability protocol, run regularly and separately with each neighbor, is used to accomplish this task.
  2. Distribution of the map information: Periodically, and whenever its connectivity changes, each node creates a short message called a link state advertisement.
- The collected set of link state advertisements is used to create the map of the entire network. 
- The second stage involves producing the routing tables by inspecting the map. 
This again involves a number of steps:
  1. Calculation of the shortest paths: The shortest path from the node to every other node is determined by running a shortest-path algorithm over the entire map; the commonly used algorithm is Dijkstra's algorithm (see the sketch after this list).
  2. Filling the routing table: The table is filled with the best next hop toward every destination, taken from the shortest paths obtained in the previous step.  
  3. Optimizations: The description above is the simple form of the algorithm; in practical applications it is used along with a number of optimizations. Whenever a change in network connectivity is detected, the shortest-path tree has to be recomputed immediately and the routing table recreated. BBN Technologies discovered a method for recomputing only the affected part of the tree.
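As a rough sketch of these two steps, the snippet below builds a routing table for one node with Dijkstra's algorithm using Python's standard heapq module. The topology, link costs, and node names are invented purely for illustration.

    import heapq

    # Link-state map of the whole network: node -> {neighbour: link cost}.
    topology = {
        "A": {"B": 1, "C": 4},
        "B": {"A": 1, "C": 2, "D": 5},
        "C": {"A": 4, "B": 2, "D": 1},
        "D": {"B": 5, "C": 1},
    }

    def dijkstra(graph, source):
        """Return (cost, first hop) toward every reachable destination."""
        dist = {source: 0}
        first_hop = {}
        heap = [(0, source, None)]            # (cost so far, node, first hop used)
        while heap:
            cost, node, hop = heapq.heappop(heap)
            if cost > dist.get(node, float("inf")):
                continue                      # stale queue entry
            for neighbour, weight in graph[node].items():
                new_cost = cost + weight
                if new_cost < dist.get(neighbour, float("inf")):
                    dist[neighbour] = new_cost
                    # leaving the source, the first hop is the neighbour itself
                    first_hop[neighbour] = neighbour if node == source else hop
                    heapq.heappush(heap, (new_cost, neighbour, first_hop[neighbour]))
        return dist, first_hop

    # Routing table of node A: destination -> (next hop, total cost).
    dist, next_hop = dijkstra(topology, "A")
    table = {dest: (next_hop[dest], dist[dest]) for dest in dist if dest != "A"}
    print(table)    # {'B': ('B', 1), 'C': ('B', 3), 'D': ('B', 4)}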

- Routing loops can form if the nodes do not all work from exactly the same map. 
- For ad hoc networks such as mobile networks, an optimized form of the protocol, the Optimized Link State Routing protocol, is used. 


Wednesday, July 17, 2013

What are network layer design issues?

- The network layer, i.e., the third layer of the OSI model, is responsible for the exchange of individual pieces of data between hosts over the network. 
- This exchange takes place only between identified end devices. 
To accomplish this task, the network layer uses four processes:
Ø  Addressing
Ø  Encapsulation
Ø  Routing
Ø  Decapsulation
In this article we focus on the design issues of the network layer. 

- To accomplish this task, the network layer also needs knowledge of the communication subnet's topology so that it can select appropriate routes through it. 
- The network layer also has to take care to choose routes that do not overload some routers and communication lines while leaving other lines and routers idle.

Listed below are some of the major network layer design issues:
  1. Services provided to layer 4, i.e., the transport layer.
  2. Implementation of connection-oriented services.
  3. Store-and-forward packet switching.
  4. Implementation of connectionless services.
  5. Comparison of datagram subnets and virtual circuits.
- The sending host transmits the packet to its nearest router, either over a point-to-point carrier link or over a LAN. 
- The packet is stored until it has arrived completely so that its checksum can be verified. 
- Once verified, the packet is transmitted to the next intermediate router. 
- This process continues until the packet reaches its destination. 
- This mechanism is termed store-and-forward packet switching (see the sketch below).
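A toy sketch of one store-and-forward hop is shown below. The packet format and the use of CRC32 as the checksum are assumptions made for illustration; real routers use protocol-defined headers and checksums.

    import zlib

    def make_packet(payload: bytes) -> dict:
        return {"payload": payload, "checksum": zlib.crc32(payload)}

    def store_and_forward(packet: dict, forward):
        # 1. the whole packet is stored before anything else happens
        payload = packet["payload"]
        # 2. the checksum is verified only after the complete packet has arrived
        if zlib.crc32(payload) != packet["checksum"]:
            return None                      # corrupted packets are dropped
        # 3. only a verified packet is passed on toward the next router
        return forward(packet)

    next_hop = lambda pkt: f"forwarded {len(pkt['payload'])} bytes to the next router"
    print(store_and_forward(make_packet(b"hello network layer"), next_hop))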

The services provided to the transport layer are designed based on the following goals:
  1. They should be independent of the router technology.
  2. The transport layer must be shielded from the type, number, and topology of the routers.
  3. The network addresses provided to the transport layer must follow a uniform numbering plan, irrespective of whether it is a LAN or a WAN.
Based on the type of service offered, two different organizations are possible.

Offered service is connectionless: 
- Packets are injected into the subnet individually and routed independently of each other. 
- No advance setup is required. 
- The subnet is referred to as a datagram subnet and the packets are called datagrams.

Offered service is connection-oriented: 
- In this case a route between the source and the destination must be established before the transmission of packets begins. 
- Here, the connection is termed a virtual circuit and the subnet a “virtual circuit subnet”, or simply a VC subnet.

- The basic idea behind virtual circuits is to avoid having to choose a new route for every packet. 
- Whenever we establish a connection, a route is selected from source to destination. 
- This is done as part of the connection setup. 
- The route is saved in tables managed by the routers and is then used by all traffic flowing over the connection. 
- When the connection is released, the virtual circuit is terminated. 
- With connection-oriented service, each packet carries an identifier telling which virtual circuit it belongs to.

- In a datagram subnet no circuit setup is required, whereas in a VC subnet it is. 
- Routers in a datagram subnet hold no per-connection state, whereas in a VC subnet each router needs table space for every virtual circuit of every connection (see the sketch below). 
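The sketch below contrasts the two forwarding styles. The addresses, interface names, and circuit numbers are invented: a datagram router looks up the full destination of every packet, while a VC router only looks up the small circuit identifier installed at connection setup.

    # Datagram subnet: every packet carries a full destination address and
    # the router consults its routing table for each packet.
    routing_table = {"10.0.2.0/24": "if1", "10.0.3.0/24": "if2"}

    def forward_datagram(packet):
        return routing_table[packet["dest_prefix"]]

    # Virtual-circuit subnet: the route was chosen at connection setup and
    # stored per circuit; packets carry only a short circuit identifier.
    vc_table = {17: ("if1", 42), 23: ("if2", 7)}    # vc_in -> (interface, vc_out)

    def forward_vc(packet):
        interface, vc_out = vc_table[packet["vc"]]
        packet["vc"] = vc_out               # the identifier is rewritten hop by hop
        return interface

    print(forward_datagram({"dest_prefix": "10.0.2.0/24"}))   # -> if1
    print(forward_vc({"vc": 17}))                             # -> if1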


Saturday, June 29, 2013

What are the reasons for using layered protocols?

Layered protocols are typical of the field of networking technology. There are two main reasons for using layered protocols:
  1. Specialization and
  2. Abstraction
- A protocol creates a neutral standard that rival companies can use to create compatible programs. 
- The field requires many protocols; they have to be organized properly and directed to the specialists who can work on them. 
- With layered protocols, a software house can create a network program by knowing the guidelines of just one layer. 
- Other companies can provide the services of the lower-level protocols. 
This helps them to specialize. 
- With abstraction, it is simply assumed that another protocol will provide the lower-level services. 
- The layered protocol architecture provides a conceptual framework that divides the complex task of exchanging information between hosts into much simpler tasks. 
- The responsibility of each protocol is narrowly defined. 
- Each protocol provides an interface to the next higher layer protocol. 
- In doing so, it hides the details of the protocol layers that lie beneath it. 
- The advantage of using layered protocols is that the same application, i.e., the user-level program, can be used over a number of diverse communication networks.
- For example, you can use the same browser whether you are connected to the Internet via a dial-up line or via a LAN. 
- One of the most common techniques used to simplify networking designs is protocol layering. 
- The networking design is divided into various functional layers, and protocols are assigned to carry out the tasks of each layer. 
- It is quite common to keep the functions of data delivery and connection management in separate layers.  
Therefore, we have one protocol for performing the data delivery tasks and a second one for performing connection management (see the sketch at the end of this post). 
- The second one is layered on top of the first one. 
- Since the connection management protocol is not concerned with data delivery, it is also quite simple. 
- The OSI seven-layer model and the DoD model are among the most important layered protocol architectures ever designed. 
- The modern Internet represents a fusion of both models. 
- Protocol layering produces simple protocols, each with a few well-defined tasks. 
- These protocols can then be put together and used as a new whole. 
- As required by particular applications, individual protocols can be replaced or removed. 
- Networking is a field involving programmers, electricians, mathematicians, designers, and so on. 
- People from these various fields have little in common, and it is layering that lets people with such varying skills simply assume that the others are carrying out their duty. 
- This is what we call abstraction. 
- Through abstraction, an application programmer can work with the protocols at one level simply assuming that the network exists, and electricians can likewise make their own assumptions and do their work. 
- One layer provides services to the layer above it and can get services from the layer below in return. 
- Abstraction is thus the fundamental foundation for layering. 
- A stack has been used to represent networking protocols since the start of network engineering. 
- Without the stack, networking would be unmanageable as well as overwhelming. 
[Figure: the layers of specialization for the first protocols derived from TCP/IP]
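As a small illustration of layering by encapsulation, the sketch below wraps a message in invented transport and network headers; the layer names and header fields are assumptions, not the real OSI or TCP/IP formats.

    def app_layer_send(message: str) -> dict:
        return {"app_data": message}

    def transport_layer_send(segment_payload: dict) -> dict:
        # the transport protocol knows only its own header and its payload
        return {"src_port": 5000, "dst_port": 80, "payload": segment_payload}

    def network_layer_send(packet_payload: dict) -> dict:
        # the network protocol treats the whole transport segment as opaque data
        return {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "payload": packet_payload}

    # Each layer does one narrowly defined job and hands the result downwards.
    packet = network_layer_send(transport_layer_send(app_layer_send("GET /")))
    print(packet)

    # The receiver peels the layers off in reverse order (decapsulation).
    print(packet["payload"]["payload"]["app_data"])    # -> GET /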



Sunday, May 19, 2013

What are the different types of schedulers and how do they work?


Scheduling is an important part of the working of operating systems. 
- The scheduler is the component that gives processes, threads, and data flows access to resources. 
- These resources may include processor time and communications bandwidth. 
- Scheduling is necessary for effectively balancing the load of the system and achieving the target quality of service (QoS). 
- Scheduling is also necessary in systems that do multitasking and multiplexing on a single processor, since they need to divide the CPU time between many processes. 
- In multiplexing, it is required for timing the simultaneous transmission of multiple flows.

Important things about Scheduler

There are three things that most concern the scheduler:
  1. Throughput
  2. Latency, including response time and turnaround time
  3. Waiting time and fairness
- In practice, conflicts arise between these goals, for example between latency and throughput. 
- It is the scheduler that makes the compromise between them. 
Which goal is given preference is decided based on the user's requirements and objectives. 
- In systems that operate in a real-time environment, such as embedded systems and robotics, the scheduler has to ensure that processes are able to meet their deadlines. 
- This is a very critical factor in maintaining the stability of the system. 
- An administrative back end is used to manage the scheduled tasks that are then sent to mobile devices.  

Types of Schedulers

There are three different types of schedulers, which we discuss below:

Long term Schedulers or Admission Schedulers: 
- The purpose of this type of scheduler is to decide which processes and jobs are to be admitted to the ready queue. 
- When a program attempts to execute a process, it is the responsibility of the long-term scheduler to delay or authorize the request to admit the process to the ready queue. 
- Thus, this scheduler dictates which processes the system will execute. 
- It also dictates the degree of concurrency and the mix of CPU-intensive and I/O-intensive processes. 
- Modern operating systems use this to make sure that processes get enough time to finish their tasks. 
- Modern GUIs would be of much less use without real-time scheduling. 
The long-term queue resides in secondary memory.

Medium term Schedulers: 
- This scheduler removes processes from physical memory and places them in virtual (secondary) memory, and vice versa. 
This process is called swapping out and swapping in. 
- A process that has been inactive for some time might be swapped out by the scheduler. 
- It may also swap out a process that is page-faulting frequently, has a low priority, or is taking up a large amount of memory. 
- This is necessary because it makes space available for other processes.

Short term Schedulers: 
- These schedulers are more commonly known as the CPU schedulers.
- The short-term scheduler decides which of the ready processes will execute next after a clock interrupt, a system call, an I/O interrupt, another hardware interrupt, and so on. 
- Thus, the short-term scheduler makes decisions much more frequently than the long-term and medium-term schedulers, since it has to decide after every time slice (see the round-robin sketch below).
There is one more component involved in CPU scheduling that is not counted as a scheduler: the dispatcher. 
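To make the short-term scheduler concrete, here is a toy round-robin scheduler. The process names, CPU burst times, and time slice are invented for illustration; real CPU schedulers are far more elaborate.

    from collections import deque

    def round_robin(processes, time_slice):
        ready_queue = deque(processes.items())      # (name, remaining CPU time)
        timeline = []
        while ready_queue:
            name, remaining = ready_queue.popleft()
            run = min(time_slice, remaining)        # run until the slice ends or the job finishes
            timeline.append((name, run))
            if remaining - run > 0:
                ready_queue.append((name, remaining - run))   # back of the ready queue
        return timeline

    print(round_robin({"P1": 5, "P2": 3, "P3": 8}, time_slice=2))
    # [('P1', 2), ('P2', 2), ('P3', 2), ('P1', 2), ('P2', 1), ('P3', 2), ('P1', 1), ('P3', 2), ('P3', 2)]

The dispatcher, mentioned above, would be the piece that actually performs the context switch each time this loop picks the next process.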


Friday, May 17, 2013

Define a process. What are sequential and concurrent processes?


- Every task we ask our computer to carry out is accomplished by a set of processes. 
- It is these processes that actually run the program. 
- A process can be defined as an instance of a program that is currently being executed. 
- A program's current activity and the code being executed are stored in the process itself. 
- However, it depends on the operating system whether a process consists of multiple threads that execute concurrently or of just one thread that executes sequentially. 

This gives rise to two different types of processes namely:

Sequential processes 
Sequential processes execute their instructions one after another on a single processor; concurrent processes, however, may sometimes require more than one processor.

Concurrent processes
Concurrent processes are executed in parallel with each other at the same time, whereas sequential processes go step by step, executing one instruction at a time (see the sketch below). 
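The difference can be illustrated with a small sketch using Python threads; the sleeping task just stands in for real work.

    import threading
    import time

    def task(name):
        time.sleep(0.1)                 # pretend to do some work
        print(f"{name} finished")

    # Sequential: a single thread executes the tasks one after another.
    start = time.time()
    for i in range(3):
        task(f"sequential-{i}")
    print(f"sequential took {time.time() - start:.2f}s")    # roughly 0.3 s

    # Concurrent: several threads of the same process run at the same time.
    start = time.time()
    threads = [threading.Thread(target=task, args=(f"concurrent-{i}",)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"concurrent took {time.time() - start:.2f}s")     # roughly 0.1 s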

Concepts of Process

- A computer program can be defined as a set of passive instructions. When these instructions are actually executed, they form a process. 
- The same program may have a number of processes associated with it. 
Multiple processes can be executed by sharing the processors and the resources. 
- If this is done, it is called multitasking. 
- Each processor takes up a single task. 
- With multitasking, the processor can switch between the different tasks, so the processes do not have to wait long. 
- When the switch is performed depends entirely on the operating system, for example:
  1. When the task is performing I/O operations or
  2. When the task itself indicates that it can now be switched or
  3. On hardware interrupts.
- Time sharing is a common type of multitasking and allows interactive user applications to respond quickly. 
- In time-sharing systems, switching is done quite rapidly. 
This gives an illusion of the simultaneous execution of the multiple processes by the same processor. 
- Such type of execution is termed as concurrency.
- Many modern operating systems avoid direct communication between independent processes for reasons of reliability and security. 
- The inter-process communication functionality is kept under strict control and mediation. 
- In general, the following resources are said to constitute a process:
Ø  The image of the executable machine code associated with the task.
Ø  Some part of virtual memory that includes the process-specific data, the executable code, a heap, and a call stack. The heap holds intermediate data generated during execution, and the call stack keeps track of active subroutines.
Ø  OS descriptors for the resources allocated to the process. These descriptors may be data sources, sinks, file descriptors, and so on.
Ø  Security attributes, including the owner of the process and its set of permissions.
Ø  Processor state, such as register contents and physical memory addressing. The registers hold this state while the process is executing; otherwise it is stored in memory.

- Most of this information about active processes is held in the process control blocks. 
- The operating system keeps processes separated and allocates them the resources they request, so that they do not interfere with one another and cause system failures such as thrashing or deadlocks. 
- But processes do need to communicate with each other. 
- To make such interaction safe, the operating system has mechanisms especially for inter-process communication.




Wednesday, May 15, 2013

What is the Process Control Block? What are its fields?


Task control block, switch frame, and task struct are all names for the same thing that we commonly call the PCB or process control block. 
This data structure belongs to the kernel of the operating system and consists of the information that is required for managing a specific process. 
- The process control block is how a process is represented inside the operating system. 
- The operating system needs to be regularly informed about the status of resources and processes, since managing the resources of the computer system for the processes is part of its purpose. 
- The common approach to this is to create and update status tables for every relevant process, resource, and object, such as files, I/O devices, and so on:
1.  Memory tables are one such example, as they contain information about how main memory and virtual (secondary) memory have been allocated to each process. They may also contain the authorization attributes that allow each process to access shared memory areas.
2.   I/O tables are another example. Their entries state whether a device required by a process is available or what has been assigned to the process. The status of the I/O operations taking place is also recorded here, along with the addresses of the memory buffers they are using.
3.   Then we have the file tables, which contain information about the status of the files and their locations in memory.
4. Lastly, we have the process tables for storing the data that the operating system requires for managing the processes. Main memory contains at least part of the process control block, even though its configuration and location vary with the operating system and the techniques it uses for memory management.
- The physical manifestation of a process consists of its instructions, its static and dynamic program data areas, task management information, and so on, and it is this that actually forms the process control block. 
- The PCB has a central role to play in process management. 
- Operating system utilities such as memory utilities, performance monitoring utilities, resource access utilities, and scheduling utilities access and modify it. 
- The current state of the operating system is defined by the set of process control blocks. 
- It is in terms of PCBs that this data structuring is carried out. 
- In today's sophisticated multitasking operating systems, many different types of data items are stored in the process control block. 
- These are the data items that are necessary for efficient and proper process management. 
- Even though the details of the PCB depend on the system, the common parts can still be identified and classified into the following three classes (a sketch follows the list):
1.  Process identification data: This includes the unique identifier of the process, which is usually a number. In multi-tasking systems it may also include the parent process identifier, the user identifier, the user group identifier, and so on. These IDs are very important, since they let the OS cross-reference the tables described above.
2.   Process state data: This information defines the status of the process when it is not being executed, making it easy for the operating system to resume the process from the appropriate point later. Therefore, this data includes the CPU process status word, the CPU general-purpose registers, the stack pointer, frame pointers, and so on.
3.   Process control data: This includes process scheduling state, priority value and amount of time elapsed since its suspension. 
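A minimal sketch of a PCB grouping these three classes of fields is shown below. The field names are illustrative assumptions; real kernels (for example the Linux task_struct) hold far more state.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ProcessControlBlock:
        # 1. process identification data
        pid: int
        parent_pid: int
        user_id: int

        # 2. process state data (saved while the process is not running)
        program_counter: int = 0
        registers: Dict[str, int] = field(default_factory=dict)
        stack_pointer: int = 0

        # 3. process control data
        scheduling_state: str = "ready"        # ready / running / blocked ...
        priority: int = 0
        cpu_time_used: float = 0.0
        open_files: List[int] = field(default_factory=list)

    pcb = ProcessControlBlock(pid=42, parent_pid=1, user_id=1000, priority=5)
    pcb.scheduling_state = "running"
    print(pcb)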


Tuesday, May 14, 2013

What is a Distributed System?


In the field of computer science, distributed computing deals with distributed systems. 
- Multiple computers that are capable of communicating via a computer network together compose a distributed system. 
- All the computers in a distributed system work together in order to accomplish a common task.
- A common program that runs across the whole system is also required and is known as a distributed program. 
- Such programs are written using a process called distributed programming. 
- Distributed computing involves the use of distributed systems for solving computational problems.
- A distributed system divides the problem into much smaller tasks that are then given to one or more of its computers. 
- These computers communicate with each other using message passing. 
- The term distributed system earlier referred to networks whose hosts were distributed over a geographical area. 
- The term was eventually refined and is now applied to a much broader concept. 
- It now also refers to autonomous processes that execute on the same physical system but interact with one another through message passing. 
Because of the wide sense to which the concept is applied, it has no formal definition; rather the following properties are used for defining it:
  1. A distributed system consists of several autonomous computational entities, each of which possesses its own local memory. These entities are commonly referred to as nodes.
  2. These entities communicate with each other by means of message passing.
- A distributed system works towards a common goal, which may involve solving a big computing problem. 
- On the other side, each node in a distributed system may have its own requirements. 
- The distributed system must provide communication means to the users and help in coordinating the use of the common resources (a small message-passing sketch follows).
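The sketch below shows the structure in miniature: a problem (summing a range of numbers) is split into smaller tasks handed to worker processes, which report partial results back purely by message passing. Everything here runs on one machine; it only illustrates the idea.

    from multiprocessing import Process, Queue

    def worker(task_queue, result_queue):
        while True:
            chunk = task_queue.get()
            if chunk is None:                  # sentinel message: no more work
                break
            result_queue.put(sum(chunk))       # send the partial result as a message

    if __name__ == "__main__":
        tasks, results = Queue(), Queue()
        nodes = [Process(target=worker, args=(tasks, results)) for _ in range(3)]
        for node in nodes:
            node.start()

        numbers = list(range(1, 1001))
        chunks = [numbers[i:i + 250] for i in range(0, len(numbers), 250)]
        for chunk in chunks:
            tasks.put(chunk)
        for _ in nodes:
            tasks.put(None)                    # tell every node to stop

        total = sum(results.get() for _ in chunks)
        for node in nodes:
            node.join()
        print(total)                           # 500500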

Properties of Distributed Systems

Distributed systems possess many other typical properties as mentioned below:
  1. The system has the capability to tolerate failures of individual nodes or computers.
  2. The system's structure is not known in advance. It depends on a number of factors such as the number of computers, the network topology, network latency, and so on. The computers in the system, and the links between them, may be of many different types. As a result, the structure of a distributed system may change while a distributed program is executing.
  3. The complete view of the distributed system is hidden from its nodes. Each node has only a limited view of, or limited information about, the system, and each node knows only a part of the input.
- There are two terms that consistently overlap with distributed computing, namely parallel computing and concurrent computing. 
- The distinctions between these three are not sharp at all. 
- The same system may be called both a parallel one and a distributed one.
- In a typical distributed system, the processors run concurrently and in parallel with one another.
- Distributed computing in a more tightly coupled form is called parallel computing. 
- Thus, distributed computing can be seen as a loosely coupled form of parallel computing. 

Two main reasons have been observed for using distributed computing:
  1. The nature of the application may itself require a network connecting many systems; for example, data produced by one system is required by others.
  2. There are cases where, in theory and principle, a single computer could be used, but in practice a distributed system is more beneficial. For example, using a cluster of low-end computers to attain the desired level of performance may be more cost-efficient. 

