

Showing posts with label Hardware. Show all posts

Tuesday, October 1, 2013

How can firewalls secure a network?

Firewalls in computer systems are either software based or hardware based, but both serve the same purpose: controlling the incoming as well as the outgoing traffic. 
In this article we discuss how firewalls secure a network. 
This control is maintained through the analysis of data packets. 
- After analyzing a packet, the firewall determines whether to allow it to pass or to drop it. 
- This decision is taken based on a set of rules.
- With this set of rules, the firewall establishes a barrier between the external network, which is not considered secure and trusted, and the internal network, which is. 
- Most personal computer operating systems come with a built-in software firewall that protects against threats from external networks. 
- Firewall components may also be installed in the intermediate routers in the network. 
- Some firewalls are also designed to perform routing.

There are different types of firewalls, which function differently. Firewalls are classified according to where the filtering takes place, i.e., at the network layer or at the application layer.

Packet filters or network layer firewalls: 
- Firewalls operating at the network layer are often termed packet filters. 
- They work at a low level of the TCP/IP protocol stack and do not allow packets to pass unless the packets satisfy all the rules. 
- These rules may be defined by the administrator of the firewall. 
- Packet filters can further be classified into two categories, namely stateless firewalls and stateful firewalls.
- The former use less memory and operate faster on simple filtering tasks, thus taking less time per packet. 
- Stateless firewalls are used for filtering stateless network protocols, i.e., protocols that have no concept of a session. 
- They are not capable of making complex decisions based on the state of the communication. 
- The latter kind maintains context about the active sessions. 
- This state information is used by these firewalls to speed up packet processing. 
- A connection is described using properties such as the UDP or TCP ports, IP addresses and so on. 
- If a packet matches an existing connection, it is allowed to pass. 
- Today, firewalls can filter packets based on attributes like the IP addresses of the source and destination hosts, the protocol, the originator's netblock, TTL values and so on.
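The first-match rule evaluation described above can be sketched in a few lines. The rule format and the addresses below are illustrative assumptions, not any real firewall's syntax:

```python
# Minimal sketch of a stateless packet filter (illustrative only).
# Each rule matches on source address, destination address and port;
# the first matching rule decides whether the packet passes.

RULES = [
    # (src, dst, dst_port, action) -- "*" is a wildcard
    ("*",           "10.0.0.5", 22,  "deny"),   # block external SSH
    ("192.168.1.0", "*",        80,  "allow"),  # allow web traffic out
    ("*",           "*",        "*", "deny"),   # default: deny everything
]

def filter_packet(src, dst, dst_port):
    """Return the action of the first rule matching the packet."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if (rule_src in ("*", src)
                and rule_dst in ("*", dst)
                and rule_port in ("*", dst_port)):
            return action
    return "deny"  # fail closed if no rule matches

print(filter_packet("192.168.1.0", "example.com", 80))  # allow
print(filter_packet("8.8.8.8", "10.0.0.5", 22))         # deny
```

Note the final catch-all rule: real packet filters are usually configured to "fail closed", denying anything not explicitly allowed.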

Application layer firewalls: 
- Firewalls of this type work at the application level of the TCP/IP stack. 
- All packets traveling in and out of an application are intercepted by the firewall. 
- Packets belonging to other applications can therefore be blocked as well. 
- All packets are inspected for malicious content in order to prevent the spread of Trojans and worms. 
- Additional inspection criteria may be applied, which adds some extra latency to packet forwarding. 
- The firewall determines whether a given connection should be accepted by a process. 
- It does this by hooking into the socket calls in order to filter the connections. 
- Application layer firewalls that work this way are termed socket filters.
- Their way of working is somewhat similar to that of packet filters, except that the rules are applied per process rather than per connection. 
- For processes that have not yet been associated with a connection, the rules are typically defined via user prompts. 
- These firewalls are usually implemented in combination with packet filters.
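The per-process decision a socket filter makes can be sketched as follows. The process names, the rule table and the `on_socket_connect` hook are all invented for illustration; a real socket filter hooks the OS socket calls themselves:

```python
# Toy sketch of a socket-filter decision: rules are keyed by process
# name rather than by connection, mirroring how an application layer
# firewall hooks socket calls. All names here are illustrative.

PROCESS_RULES = {
    "browser.exe": "allow",   # known process, allowed to connect
    "worm.exe":    "deny",    # known-bad process, always blocked
}

def on_socket_connect(process_name):
    """Called (conceptually) whenever a process issues a connect()."""
    if process_name in PROCESS_RULES:
        return PROCESS_RULES[process_name]
    # Unknown process: a real firewall would prompt the user here
    # and remember the answer as a new rule.
    return "prompt"

print(on_socket_connect("browser.exe"))  # allow
print(on_socket_connect("unknown.exe")) # prompt
```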




Sunday, September 15, 2013

What is inter-network routing?

In this article we shall discuss inter-network routing. Before moving to that, there are certain terms with which you should be familiar:
Ø  End systems: The ISO (International Organization for Standardization) defines end systems as network elements that do not have the ability to forward packets across networks. Sometimes the term host is used to refer to an end system.
Ø  Intermediate systems: These are network elements that do have the ability to forward packets across networks. The most common examples are routers, switches, bridges and so on.
Ø  Network: A network can be defined as a part of the inter-network infrastructure encompassing various elements, including hubs, repeaters, bridges and so on. Networks are bounded by intermediate systems.
Ø  Router: This is an intermediate system used for connecting various networks with each other. It might support one protocol (router) or many protocols (multi-protocol router). Its hardware is optimized especially for performing routing, while its software carries out the routing and maintains the routing tables.
Apart from these devices, there are 3 types of addresses involved in inter-network routing:
Ø  The inter-network address: The host address and the network address are combined to form this address. It uniquely identifies a host over the inter-network.
Ø  The host address or host ID: This ID might be assigned by the administrator or might simply be the physical address of the host. It uniquely identifies the host on its own network.
Ø  The network address or network ID: This is the address that identifies a network within an inter-network.

Every data packet carries a network layer header. When the packet is transmitted from one host to another, this header contains the following:
Ø  The source inter-network address: This combines the address of the source host and the source network.
Ø  The destination inter-network address: This combines the address of the destination host and the destination network.
Ø  The hop count: This usually begins at zero and is incremented each time the packet crosses a router. Alternatively, it might be assigned some maximum value that is decremented at each router. The purpose of the hop count is to make sure that the packet does not keep circulating endlessly in the network.
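The hop-count mechanism just described can be sketched as follows; the field names and the limit of 15 hops are illustrative choices:

```python
# Sketch of hop-count handling at a router: the counter in the packet
# header is incremented at each hop, and the packet is discarded once
# the counter exceeds a maximum, so a packet caught in a routing loop
# cannot circulate endlessly.

MAX_HOPS = 15  # illustrative limit

def forward(packet):
    """Increment the hop count; return False if the packet must be dropped."""
    packet["hops"] += 1
    return packet["hops"] <= MAX_HOPS

# Simulate a packet stuck in a loop: it is dropped after MAX_HOPS hops.
packet = {"src": "A", "dst": "B", "hops": 0}
while forward(packet):
    pass  # in a real router, the packet would be sent to the next hop
print(packet["hops"])  # 16 -> dropped once the limit is exceeded
```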


- For inter-network routing, two things have to be known.
- Firstly, how do you reach routers that lie in the same network, and secondly, how do you reach routers that lie in other networks? 
- The answer to the first question is easy, as it is the common routing problem between two hosts residing on the same network. 
- This routing is handled by an interior gateway protocol, and it can differ from network to network since only local routing information is required. 
- In this case, the commonly used protocol is Open Shortest Path First (OSPF). 
- Routing between two different networks is performed using an exterior gateway protocol. 
- This is the actual problem of inter-network routing. 
- Here, the commonly used protocol is BGP, the Border Gateway Protocol. 
- The graph used for inter-network routing is quite different from the one used for routing within a network. 
- This is because, for routing across the inter-network, routers that lie in the same network can be thought of as being directly connected to one another. 
- All the networks in an inter-network function as though each is one large unit. 
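The split between "same network" and "other networks" shows up in a router's forwarding decision. This toy sketch (the table entries and the `net.host` address format are invented) looks up the destination's network ID and falls back to a border router for anything outside the known networks:

```python
# Toy forwarding table keyed by network ID. A destination on a known
# network is forwarded via the listed next hop; anything else goes to
# a default (exterior) gateway. Addresses are "net.host" for brevity.

FORWARDING_TABLE = {
    "net1": "local",     # directly attached network
    "net2": "router-B",  # reachable via interior routing (e.g. OSPF)
}
DEFAULT_GATEWAY = "border-router"  # exterior routing (e.g. BGP)

def next_hop(dest_address):
    """Pick the next hop based on the network part of the address."""
    network_id, _host_id = dest_address.split(".")
    return FORWARDING_TABLE.get(network_id, DEFAULT_GATEWAY)

print(next_hop("net1.7"))   # local
print(next_hop("net2.42"))  # router-B
print(next_hop("net9.1"))   # border-router
```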


Saturday, September 7, 2013

Explain the concept of inter-networking?

- The practice in which one computer network is connected with other networks is called inter-networking. 
- The networks are connected with the help of gateways. 
- Gateways are used because they offer a common method for routing the data packets across the networks.
- The resulting system, in which a number of networks are connected, is called an inter-network or, more commonly, an internet. 
- The term "internetworking" is formed by combining "inter" and "networking".  
- The Internet is the best-known and most popular example of inter-networking. 
- The Internet has formed as a result of many networks being connected with the help of numerous technologies. 
- Many types of hardware technologies underlie the Internet. 
- The Internet protocol suite (IP suite) is the inter-networking protocol standard responsible for unifying these diverse networks. 
- This protocol suite is more commonly known as TCP/IP. 
- Two local area networks (LANs) connected to one another by means of a router form the smallest possible internet. 
- An inter-network is not formed by simply connecting two LANs together via a hub or a switch. 
- That is merely an expansion of the original local area network. 
- Inter-networking started as a means for connecting disparate networking technologies. 
- Eventually, it gained widespread popularity because of the growing need to connect many local area networks through some kind of WAN (wide area network). 
- "Catenet" was the original term used for an inter-network. 
- An inter-network can include many other types of networks, such as the PAN (personal area network). 
- Gateways were the network elements originally used for connecting the various networks in the predecessor of the Internet, the ARPANET. 
- Today, these connecting devices are more commonly known as Internet routers. 
- Networks can also be interconnected at the link layer of the networking model. 
- This layer is known as the hardware-centric layer, and it lies below the level of the TCP/IP logical interfaces. 

Two devices are mainly used in establishing this interconnection:
Ø  Network switches and
Ø  Network bridges
- Even this cannot be called inter-networking; rather, the resulting system is just a single, larger sub-network. 
- Furthermore, no inter-networking protocol is required for traffic traversing these devices. 
- However, it is possible to convert a single network into an inter-network. 
- This can be done by dividing the network into various segments and logically dividing the segment traffic using routers. 
- The Internet protocol suite has been designed to provide a packet service. 
- This packet service offered by the IP suite is unreliable (best-effort). 
- The architecture avoids intermediate network elements that maintain network state. 
- Instead, its focus is on the end points of the active communication session.
- For reliable transfer of data, applications must use a proper transport layer protocol. 
- One such protocol is TCP (Transmission Control Protocol), which provides a reliable stream for communication. 
- Sometimes a simpler protocol such as UDP (User Datagram Protocol) might be used by applications. 
- Applications use this protocol for tasks for which reliable data delivery is not required, or for which real-time delivery matters more than reliability. 

Examples of such tasks include voice chat or watching a video online. Inter-networking uses two architectural models, namely:

  1. OSI or the Open Systems Interconnection model: This model has a 7-layer architecture that covers both the hardware and the software interfaces.
  2. TCP/IP model: The architecture of this model is somewhat loosely defined when compared with the OSI model. 
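The unreliable-datagram service contrasted with TCP above can be seen directly in code. This minimal sketch sends one UDP datagram over loopback using Python's standard `socket` API; there is no connection setup and no acknowledgement:

```python
import socket

# Minimal UDP round trip over loopback, illustrating the "unreliable
# datagram" service the text contrasts with TCP's reliable stream.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))  # fire and forget: no ACK

data, addr = receiver.recvfrom(1024)
print(data.decode())  # hello
sender.close()
receiver.close()
```

On a real network (unlike loopback) the datagram could be lost or reordered, and the application would never be told; that trade-off is exactly why voice and video tolerate UDP.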


Monday, June 24, 2013

Explain the page replacement algorithms - FIFO, LRU, and Optimal

- Paging is used by most computer operating systems for virtual memory management. 
- Whenever a page fault occurs, some pages are swapped in and others are swapped out. But who decides which pages are to be replaced, and how? 
- This purpose is served by the page replacement algorithms. 
- Page replacement algorithms decide which pages are to be paged out, i.e., written back to disk, when a page of memory has to be allocated. 
- Paging takes place only upon the occurrence of a page fault.
- In such situations a free page cannot be used, either because none is available or because the number of available pages is below some threshold. 
- If a previously paged-out page is referenced again, it has to be read in from the disk again. 
- To do this, the operating system has to wait for the completion of the input/output operation. 
- The quality of a page replacement algorithm is indicated by the total time spent waiting for page-ins. 
- The less it is, the better the algorithm. 
- A page replacement algorithm studies the information about page accesses provided by the hardware, and decides which pages should be replaced so that the number of page faults is minimized. 

In this article we shall look at some of the page replacement algorithms.

FIFO (first-in, first-out): 
- This, the simplest of all the page replacement algorithms, has the lowest overhead and requires very little book-keeping on the part of the OS. 
- The operating system keeps all the pages currently in memory in a queue.
- The pages that have arrived most recently are kept at the back, while the oldest ones stand at the front of the queue.
- When a replacement has to be made, the oldest page is selected and replaced. 
- Even though this replacement algorithm is cheap and intuitive, in practice it does not perform well. 
- Therefore, it is rarely used in its original form. 
- The VAX/VMS operating system uses the FIFO replacement algorithm with some modifications. 
- If a limited number of entries are skipped, you get a partial second chance. 
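The queue-based book-keeping above can be simulated in a few lines. This sketch counts page faults for a reference string; the reference string used is the classic one that exhibits Belady's anomaly (more frames, more faults):

```python
from collections import deque

def fifo_faults(reference_string, frames):
    """Count page faults under FIFO replacement with `frames` frames."""
    queue = deque()           # oldest page at the left
    resident = set()
    faults = 0
    for page in reference_string:
        if page in resident:
            continue          # hit: FIFO ignores usage, nothing changes
        faults += 1
        if len(queue) == frames:
            resident.discard(queue.popleft())  # evict the oldest page
        queue.append(page)
        resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- Belady's anomaly: more frames, more faults
```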

Least recently used (LRU): 
- This replacement algorithm bears a resemblance in name to NRU (not recently used). 
- The difference is that LRU tracks page usage over a certain period of time. 
- It is based on the idea that the pages used heavily in the current set of instructions will also be used heavily in the next set of instructions. 
- In theory, LRU provides near-optimal performance, but in practice it is quite expensive to implement. 
- To reduce the implementation cost, a few implementation methods have been devised. 
- Of these, the linked list method proves to be the costliest. 
- It is expensive because it involves moving items around in memory on every access, which is a very time-consuming task. 
- There is another method that requires hardware support.
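The linked-list book-keeping the text mentions maps naturally onto Python's `OrderedDict`, which keeps its keys in insertion/usage order. This sketch counts LRU faults on the same reference string used for FIFO above:

```python
from collections import OrderedDict

def lru_faults(reference_string, frames):
    """Count page faults under LRU replacement with `frames` frames."""
    memory = OrderedDict()    # least recently used page at the left
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)    # record the use (the costly step)
            continue
        faults += 1
        if len(memory) == frames:
            memory.popitem(last=False)  # evict the least recently used
        memory[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```

The `move_to_end` on every hit is exactly the "moving items about" that makes a pure software LRU expensive in an OS kernel.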

Optimal page replacement algorithm: 
- This is also known as the clairvoyant replacement algorithm. 
- When a page has to be swapped in, it replaces the page that, according to the operating system, will not be used again for the longest time. 
- In practice, this algorithm is impossible to implement in a general-purpose OS, because the time at which a page will next be used is difficult to predict. 
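Although unimplementable online, the clairvoyant algorithm is easy to simulate when the whole reference string is known in advance, which is how it is normally used as a yardstick. A sketch:

```python
def optimal_faults(reference_string, frames):
    """Count page faults under the clairvoyant (optimal) algorithm."""
    memory = set()
    faults = 0
    for i, page in enumerate(reference_string):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            future = reference_string[i + 1:]
            # Evict the resident page whose next use is farthest away,
            # or that is never used again.
            victim = max(
                memory,
                key=lambda p: future.index(p) if p in future else len(future),
            )
            memory.discard(victim)
        memory.add(page)
    return faults

print(optimal_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 7
```

On this reference string the optimal algorithm incurs 7 faults, against 9 for FIFO and 10 for LRU with the same 3 frames.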


Thursday, June 20, 2013

Explain the single and multiple partition techniques?

There are a number of allocation techniques available; they have different properties and allocate memory based on different principles. One prominent type of allocation is partitioned allocation. 
- In partitioned allocation, the primary or main memory of the system is divided into a number of contiguous memory blocks, commonly known as memory partitions. 
- Each of these partitions holds all the information that might be required for carrying out a specific task. 
- Allocating these memory partitions to the various jobs and processes, and de-allocating them after use, is the duty of the memory management unit.  
- But partitioned allocation cannot be carried out with software alone. 
- It requires some hardware support. 
- This support prevents the various jobs from interfering with each other and with the operating system. 
- For example, a lock-and-key technique was used by the IBM System/360. 
- Other systems used registers, called base and bound registers, that held the partition limits and were also used for flagging invalid accesses. 
- The UNIVAC 1108 used a limits register with separate base and bound values for data and instructions. 
- This system used a technique called memory interleaving to place the so-called i-banks and d-banks in different memory modules. 

Partitions are of two types, namely:
Ø  Static partitions: These are defined at boot time or IPL (initial program load), or sometimes by the computer operator. An example of a system using static partitions is the IBM System/360 operating system with MFT (multiprogramming with a fixed number of tasks).
Ø  Dynamic partitions: These are created automatically for a specific job. An example is the IBM System/360 operating system with MVT (multiprogramming with a variable number of tasks).
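A toy MFT-style allocator with static partitions can be sketched as follows. The partition sizes, job names and the first-fit policy are illustrative assumptions, not taken from any particular system:

```python
# Toy MFT-style allocator: memory is pre-divided into fixed partitions,
# and each job gets the first free partition large enough to hold it.
# Sizes (in KB) and the first-fit policy are illustrative.

partitions = [
    {"size": 100, "job": None},
    {"size": 200, "job": None},
    {"size": 400, "job": None},
]

def allocate(job, size):
    """Place the job in the first free partition that fits, if any."""
    for p in partitions:
        if p["job"] is None and p["size"] >= size:
            p["job"] = job
            return True
    return False  # no suitable partition: the job must wait

def free(job):
    """De-allocate the partition held by the job."""
    for p in partitions:
        if p["job"] == job:
            p["job"] = None

print(allocate("A", 150))  # True  -> placed in the 200 KB partition
print(allocate("B", 300))  # True  -> placed in the 400 KB partition
print(allocate("C", 500))  # False -> no partition is big enough
free("A")
print(allocate("D", 120))  # True  -> reuses the 200 KB partition
```

Note how job A wastes 50 KB of its 200 KB partition: this internal fragmentation is the classic drawback of static partitioning that dynamic (MVT-style) partitions address.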

- Hardware support such as base and bound registers (GE-635, PDP-10, etc.) or the tagged memory of the Burroughs Corporation B5500 is used for relocating memory partitions. 
- Partitions that can be relocated can be compacted to form larger contiguous memory chunks in the main memory. 
- Some systems allow partitions to be swapped out to secondary storage, which in effect provides additional memory.
The partitioned allocation offers two types of allocation techniques namely:
  1. Single partition techniques
  2. Multiple partition techniques

- Single partition techniques use a single time-sharing partition and swap memory partitions in and out of it. 
- These techniques are used by IBM's TSO (Time Sharing Option). 
- The multiple partition techniques use multiple time-sharing partitions. 
- In DOS systems, when the disk is partitioned, each of the partitions acts as if it were an individual disk drive. 
- Partitioning is useful for systems on which there is more than one operating system. 
- Partitioning techniques are meant to increase the efficiency of the disk. 
- Hard and soft partitioning are used on Apple Macintosh computers. 
- The creation, relocation and deletion of partitions can be harmful to the data. 
- That is why it is good to have a backup of the data stored on your system. 
- Several issues have to be considered if you want to install more than one operating system on your computer. 
- Day by day, disks are becoming less expensive and bigger. 
- You can opt for separate disks for storing data and installing OSs. 


Thursday, June 6, 2013

Explain the structure of the operating systems.

We are all addicted to using computers, but we never really bother to find out what is actually inside, i.e., what is operating the whole system. Then something inevitable occurs: your computer crashes and the machine cannot boot. You call a software engineer, and he tells you that the operating system of the computer has to be reloaded. You are of course familiar with the term operating system, but do you know what it is exactly? 

About Operating System

- The operating system is the software that actually gives life to the machine. 
- Every computer system needs some basic intelligence to start with. 
- Unlike us humans, computers do not have any inborn intelligence. 
- This basic intelligence is required because it is what the system uses to provide the essential services for running programs, such as access to the various peripherals, use of the processor, allocation of memory and so on. 
- The operating system also provides services for the users. 
- As a user, you may need to create, copy or delete files. 
- It is the operating system that manages the hardware of the computer. 
- It also sets up a proper environment in which programs can execute. 
- It is effectively an interface between the software and the hardware of the system.
- When the computer boots, the operating system is loaded into main memory. 
- The OS remains active as long as the system is running. 

Structure of Operating Systems

- There are several components of the operating system, which we shall discuss in this article.
- These components make up the structure of the operating system.

1. Communications: 
- Processes may exchange information and data, within the same computer or between different computers over a network. 
- This information might be shared via shared memory within the same computer, or via message passing across a computer network. 
- In message passing, the messages are moved by the operating system.

2. Error detection: 
- The operating system has to be alert to all the possible errors that might occur. 
- These errors may occur anywhere, from the CPU and memory hardware to the peripheral devices and the user applications. 
- For every type of error, proper action must be taken by the operating system to ensure that correct and consistent computing takes place. 
- Debugging facilities greatly enhance the abilities of users and programmers.

3. Resource allocation: 
- Resources have to be allocated to all of the running processes. 
- Resources such as main memory, file storage and CPU cycles may have special allocation code, while other resources such as I/O devices may have request and release code.

4. Accounting: 
- This component is responsible for keeping track of the computer resources being used and released.

5. Protection and Security: 
- The owners of data and information might want it to be protected and secured against theft and accidental modification.
- Above all, processes should not interfere with each other's working. 
- The protection aspect involves controlling access to all the resources of the system. 
- Security involves user authentication, in order to protect the system from invalid access attempts.

6. Command line interface or CLI: 
- This is the command interpreter that allows direct entry of commands. 
- It is implemented either by a systems program or by the kernel.
- There may be multiple implementations, known as shells.

7. Graphical User Interface: 
This is the interface via which the user interacts with the system graphically rather than through typed commands. 


Thursday, May 30, 2013

What are the various Disk Scheduling methods?

About Disk Scheduling

The I/O system has got the following layers:
  1. User processes: The functions of this layer include making I/O calls, formatting the I/O and spooling.
  2. Device independent software: Functions are naming, blocking, protection, allocating and buffering.
  3. Device drivers: Functions include setting up the device registers and checking their status.
  4. Interrupt handlers: These wake up the device drivers upon completion of the I/O.
  5. Hardware: Performing the I/O operations.
- A disk drive can be pictured as a large one-dimensional array of logical blocks, where the logical block is the smallest unit of transfer.  
- These blocks are mapped onto the disk sectors in a sequential manner. 
- It is the operating system's responsibility to use the disk-drive hardware efficiently, which means increasing the access speed and the bandwidth of the disk. 

Algorithms for Scheduling Disk Requests

There are several algorithms existing for the scheduling of the disk requests:

Ø  SSTF (shortest seek time first): 
- In this method, the request with the minimum seek time from the current head position is selected. 
- This method is a variant of SJF (shortest job first) scheduling and therefore carries some possibility of process starvation.

Ø  SCAN: 
- The disk arm starts at one end of the disk and moves toward the other end, servicing requests along the way until it reaches the opposite end. 
- There the head direction is reversed and servicing continues. 
- This is sometimes called the elevator algorithm.

Ø  C-SCAN: 
- A better algorithm than the previous one. 
- It offers a more uniform waiting time than SCAN. 
- The head moves from one end to the other, servicing the requests encountered along the way. 
- However, the difference is that when the head reaches the other end, it immediately returns to the beginning of the disk without servicing any requests on the return trip, and then starts again. 
- The cylinders are treated as a circular list that wraps around from the last cylinder to the first.

Ø  C-LOOK: 
- This is a modified version of C-SCAN. 
- Here the arm travels only as far as the final request in each direction, rather than going all the way to the end of the disk. 
- Then the direction is reversed immediately and servicing continues.
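The policies above are easiest to compare by total head movement. This sketch implements SSTF, SCAN (sweeping toward higher cylinders first) and C-LOOK for a request queue; the queue, start position and disk size are the classic textbook values, used here purely as an example:

```python
def sstf(start, requests):
    """Total head movement under shortest-seek-time-first."""
    pending, pos, moved = sorted(requests), start, 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))  # closest request
        moved += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return moved

def scan(start, requests, max_cyl):
    """Total head movement under SCAN, sweeping upward first."""
    lower = [c for c in requests if c < start]
    if lower:
        # go to the top of the disk, then back down to the lowest request
        return (max_cyl - start) + (max_cyl - min(lower))
    return max(requests) - start

def c_look(start, requests):
    """Total head movement under C-LOOK, sweeping upward first."""
    up = sorted(c for c in requests if c >= start)
    down = sorted(c for c in requests if c < start)
    moved = (up[-1] - start) if up else 0
    if down and up:
        moved += up[-1] - down[0]   # long seek back to the lowest request
        moved += down[-1] - down[0] # finish the remaining requests
    return moved

queue = [98, 183, 37, 122, 14, 124, 65, 67]  # pending cylinder requests
print(sstf(53, queue))        # 236
print(scan(53, queue, 199))   # 331
print(c_look(53, queue))      # 322
```

SSTF wins on this particular queue, but it serves whichever request happens to be nearby, which is exactly where its starvation risk comes from.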

- For disk scheduling it is important that the method be selected as per the requirements. 
- SSTF is the most commonly used and appeals to the needs naturally. 
- For a system that often places a heavy load on the disk, the SCAN and C-SCAN methods can help. 
- The number as well as the kind of requests affects the performance in a number of ways.
- On the other hand, the file-allocation method influences the requests for disk service. 
- These algorithms should be written as a separate module of the OS, so that they can easily be replaced with a different algorithm if required. 
- As a default algorithm, LOOK or SSTF is the most reasonable choice. 

Ways to attach to a disk

There are two ways of attaching the disk:
Ø  Network attached: This attachment is made via a network, and the result is called network attached storage (NAS). A dedicated network connecting such storage devices to servers is called a storage area network (SAN).
Ø  Host attached: This attachment is made via the I/O port.


All these disk scheduling methods aim at optimizing access to secondary storage and making the whole system efficient. 


Wednesday, May 29, 2013

Explain the various File Access methods?

One of the most important facilities of a mainframe operating system is its access methods, which make it possible for you to access data on external devices such as tape or disk. 

What are access methods?

- Access methods provide an API for transferring data from one device to another.
- In this respect, the API plays the role that a device driver plays in the operating systems of non-mainframe computers. 
- There were several reasons behind the introduction of the access methods. 
- A special program had to be written for the I/O channel, a processor dedicated entirely to controlling access to the peripheral storage devices and the transfer of data to and from physical memory. 
- These channel programs consist of special instructions known as CCWs, or channel command words.
- Writing such programs requires very detailed knowledge of the characteristics of the hardware. 

Benefits of File Access Methods

There are 3 major benefits of the file access methods:
Ø  Ease of programming: The programmer does not have to deal with the procedures of specific devices, recovery tactics and error detection. A program designed to process a data set will do so no matter where the data is stored.
Ø  Ease of hardware replacement: A program need not be altered when data is migrated from an older to a newer model of storage device, provided the new model supports the same access methods.
Ø  Ease in sharing data set access: The access methods can be trusted to manage multiple accesses to the same file. At the same time, they ensure the security of the system and the integrity of the data.

Some File/Storage Access Methods

Ø  Basic direct access method (BDAM)
Ø  Basic sequential access method (BSAM)
Ø  Queued sequential access method (QSAM)
Ø  Basic partitioned access method (BPAM)
Ø  Indexed sequential access method (ISAM)
Ø  Virtual storage access method (VSAM)
Ø  OAM (object access method)

- Both types of access, the queued and the basic, are suitable for dealing with the records of a data set. 
- The queued access methods are an improvement over the basic file access methods. 
- They support a read-ahead scheme and internal blocking of data. 
- Blocking allows multiple records to be combined into one unit, thus increasing performance. 
- The sequential methods assume that the records will be processed only in sequence, which is just the opposite of the direct access methods. 
- There are devices, such as magnetic tape, that enforce only sequential access. 
- A data set can be written using sequential access and later processed in a direct manner.
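The difference between sequential and direct processing of fixed-length records can be sketched with ordinary file seeks; the record size and contents below are illustrative choices:

```python
import io

RECORD_SIZE = 16  # fixed-length records, an illustrative choice

# Write a "data set" sequentially: records 0..4, each padded to 16 bytes.
dataset = io.BytesIO()
for i in range(5):
    dataset.write(f"record-{i}".encode().ljust(RECORD_SIZE))

# Sequential access: read records one after another from the start.
dataset.seek(0)
first = dataset.read(RECORD_SIZE).rstrip()

# Direct access: jump straight to record 3 by computing its offset.
dataset.seek(3 * RECORD_SIZE)
third = dataset.read(RECORD_SIZE).rstrip()

print(first.decode())  # record-0
print(third.decode())  # record-3
```

The direct read works only because every record has the same length, so the offset of record *n* is simply *n* × RECORD_SIZE; a tape drive, by contrast, would have to be wound past the first three records.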

Today we have access methods that are network-oriented such as the following:
Ø  Basic telecommunications access method or BTAM
Ø  Queued teleprocessing access method or QTAM
Ø  Telecommunications access method or TCAM
Ø  Virtual telecommunications access method or VTAM

The term access method was also used by IMS, the IBM Information Management System, to refer to its methods for manipulating database records. 
- The access methods it uses are:
Ø  GSAM or generalized sequential access method
Ø  HDAM or hierarchical direct access method
Ø  HIDAM or hierarchical indexed direct access method
Ø  HISAM or hierarchical indexed sequential access method
Ø  HSAM or hierarchical sequential access method
Ø  PHDAM or partitioned hierarchical direct access method
Ø  PHIDAM or partitioned hierarchical indexed direct access method


