
Sunday, October 13, 2013

What are the two fundamental cryptography principles?

In this article, we discuss the two fundamental principles that govern a cryptographic system. 

1. Redundancy
- Some redundancy must be present in all encrypted messages. 
- By redundancy we mean information that is not required for understanding the message itself; it reduces the chances for an intruder to mount attacks. 
- Such attacks involve putting intercepted information to misuse without actually understanding it. 
- This can be understood more easily with the example of a credit card. 
- The credit card number is not sent alone over the internet; it is accompanied by side information such as the card holder's date of birth, the card's expiry date and so on. 
- Including such information with the card number cuts down the chances of simply making up a valid number. 
- Adding a good amount of redundancy prevents active intruders from sending garbage values and having them verified as valid messages. 
- The recipient must be able to determine whether a message is valid by simple inspection and calculation. 
- Without redundancy, an attacker could send a junk message and the recipient would decode it as a valid one. 
- However, there is also a small concern here. 
- The redundancy must not take the form of n zeroes at the beginning or end of the message, because such messages become easy to predict, which makes the cryptanalyst's work easier.
- A CRC polynomial can be used instead of zeroes, because it gives the attacker more work to do. 
- Using a cryptographic hash may be even better.
- Redundancy also has a role to play in quantum cryptography. 
- Some redundancy is required in the messages for Bob to determine whether a message has been tampered with. 
- Repeating the message twice is a crude form of redundancy.
- If the two copies are found not to be identical, Bob knows that either somebody is interfering with the transmission or there is a lot of noise. 
- However, such repetition proves to be expensive. 
- Therefore, for error detection and correction, methods such as Reed-Solomon and Hamming codes are used instead.
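The hash-based form of redundancy mentioned above can be sketched in a few lines. This is a minimal illustration, not a complete cryptosystem: the sender appends a SHA-256 digest to the plaintext (which would then be encrypted), and the recipient verifies the digest after decryption. The function names are hypothetical.

```python
import hashlib

def add_redundancy(message: bytes) -> bytes:
    """Append a SHA-256 digest so the recipient can validate the message."""
    return message + hashlib.sha256(message).digest()

def check_redundancy(data: bytes) -> bytes:
    """Split off the 32-byte digest and verify it; reject junk messages."""
    message, digest = data[:-32], data[-32:]
    if hashlib.sha256(message).digest() != digest:
        raise ValueError("invalid message: redundancy check failed")
    return message

plaintext = add_redundancy(b"PAY 100 TO ALICE")
assert check_redundancy(plaintext) == b"PAY 100 TO ALICE"
```

A random junk message would fail the check with overwhelming probability, which is exactly the property the recipient relies on.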

2. Freshness
- Measures must be taken to prevent active intruders from playing back old messages. 
- The longer an encrypted message is held by an active intruder, the greater the possibility that he can break into it. 
- A good example of this is the UNIX password file.
- The password file is readable by anybody who has an account on the host. 
- Intruders can obtain a copy of this file and then attempt to crack the passwords offline.
- Added redundancy also simplifies the cryptanalysis of old messages.
- It must therefore be checked whether a message has been sent recently or is an old one. 
- One measure for doing so is to include in each message a timestamp that is valid for only a few seconds. 
- The recipient can then keep messages around for that many seconds, compare them with incoming messages and filter out the duplicates.
- Messages older than this time period are rejected as being too old.
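The timestamp-and-duplicate check described above can be sketched as a small replay filter. This is an illustrative sketch, not a hardened implementation; the 10-second window, the `accept` function and the message-id scheme are all assumptions for the example.

```python
import time

REPLAY_WINDOW = 10  # seconds a message stays fresh (assumed value)

seen = {}  # message id -> time it was first seen

def accept(msg_id, timestamp, now=None):
    """Reject messages that are too old, and duplicates inside the window."""
    now = time.time() if now is None else now
    # Purge cache entries that have aged out of the window.
    for old in [m for m, t in seen.items() if now - t > REPLAY_WINDOW]:
        del seen[old]
    if now - timestamp > REPLAY_WINDOW:
        return False      # too old: reject as a possible replay
    if msg_id in seen:
        return False      # duplicate inside the window
    seen[msg_id] = now
    return True

assert accept("m1", timestamp=100.0, now=101.0)       # fresh, first time
assert not accept("m1", timestamp=100.0, now=102.0)   # duplicate
assert not accept("m2", timestamp=50.0, now=101.0)    # too old
```

Because entries older than the window are purged, the cache stays small while still catching any replay that arrives within the freshness period.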

Apart from the above two principles, the following are some other principles of cryptography:
Ø Authentication: ensuring that the message was generated by the claimed sender and no one else, so that no outsider can claim to be the originator of the message.
Ø Integrity: in cryptography, the integrity of a message must be preserved while it travels from one host to another. This involves ensuring that the message is not altered on the way. Using a cryptographic hash is one way to achieve this.
Ø Non-repudiation: ensuring that the sender cannot later deny having sent the message.


Sunday, August 25, 2013

What is the concept of flow control?

- Flow control is an important concept in the field of data communications. 
- It is the process of managing the rate of data transmission between two communicating nodes. 
- Flow control is important to prevent a fast sender from outrunning a slow receiver. 
- It gives the receiver a mechanism with which it can control the sender's transmission speed.
- This prevents the receiving node from being overwhelmed with traffic from the transmitting node.
- Do not confuse flow control with congestion control; the two are different concepts. 
- Congestion control comes into play when there is an actual problem of network congestion, and it controls the data flow for that reason. 

Flow control mechanisms, on the other hand, can be classified in the following two ways:
  1. The receiving node sends feedback to the sending node.
  2. The receiving node does not send feedback to the sending node.
- The sending computer may tend to send data at a faster rate than the other computer can receive and process it. 
- This is why we require flow control. 
- This situation arises when the traffic load on the receiving computer is too high compared to that on the sending computer. 
- It can also arise when the receiving computer has less processing power than the one sending the data.

Stop and Wait Flow Control Technique 
- This is the simplest type of flow control technique. 
- Here, when the receiver is ready to start receiving data, the message is broken down into a number of frames. 
- After sending each frame, the sending system waits for a specific time to get an acknowledgement (ACK) from the receiver. 
- The purpose of the acknowledgement is to confirm that the frame has been received properly. 
- If a packet or frame gets lost during transmission, it has to be re-transmitted. 
- We call this process automatic repeat request, or ARQ. 
- The problem with this technique is that it can transmit only one frame at a time. 
- This makes the transmission channel very inefficient. 
- Until the sender gets an acknowledgement, it will not proceed to transmit another frame. 
- Both the transmission channel and the sender sit idle during this period. 
- Simplicity is this method's biggest advantage. 
- Its disadvantage is the inefficiency that results from this simplicity. 
- The sender's waiting state creates the inefficiency. 
- This is especially severe when the transmission delay is shorter than the propagation delay. 
- Sending longer messages is another cause of inefficiency, because longer messages increase the chance of errors creeping in; with short messages, errors are detected earlier. 
- Breaking one big message into many separate smaller frames also adds inefficiency, because under stop-and-wait the pieces altogether take a long time to transmit.
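The stop-and-wait cycle described above can be sketched as a tiny simulation over a lossy channel. This is a sketch under simplifying assumptions (loss affects a frame or its ACK with one combined probability; the function name and loss rate are made up for the example).

```python
import random

def stop_and_wait_send(frames, loss_rate=0.2, seed=1):
    """Simulate stop-and-wait ARQ over a lossy channel.

    Each frame is re-sent until an ACK comes back; returns the frames
    delivered and the total number of transmissions used.
    """
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for frame in frames:
        while True:
            transmissions += 1
            if rng.random() >= loss_rate:   # frame and its ACK got through
                delivered.append(frame)
                break                        # only now move to the next frame
    return delivered, transmissions

frames = ["f0", "f1", "f2", "f3"]
delivered, sent = stop_and_wait_send(frames)
assert delivered == frames and sent >= len(frames)
```

Note that the sender never has more than one frame outstanding, which is exactly why the channel sits idle between each frame and its ACK.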


Sliding window Flow Control Technique 
- This is another method of flow control, in which the receiver gives the sender permission to transmit data continuously until a window fills up. 
- Once the window is full, the sender stops transmitting until a new window is advertised. 
- This method can be used to better effect if the size of the buffer is kept limited. 
- During transmission, buffer space for, say, n frames is allocated. 
- This means the receiver can accept n frames without having to wait for an ACK. 
- After n frames, an ACK is sent containing the sequence number of the next frame expected. 
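The buffer-of-n-frames idea above can be sketched as a small receiver class. This is an illustrative sketch only; the class and method names are invented, and real protocols carry the window advertisement inside the ACK header.

```python
class ReceiverWindow:
    """Credit-based flow control sketch: the receiver allocates a buffer
    of n frames and ACKs with the next expected sequence number."""

    def __init__(self, n):
        self.n = n             # buffer space, in frames
        self.buffer = []
        self.next_seq = 0

    def receive(self, seq, frame):
        if len(self.buffer) >= self.n:
            return None        # window full: sender must pause
        self.buffer.append(frame)
        self.next_seq = seq + 1
        return self.next_seq   # ACK carries the next expected frame number

    def drain(self):
        """Application consumes the buffered frames, freeing the window."""
        frames, self.buffer = self.buffer, []
        return frames

rw = ReceiverWindow(2)
assert rw.receive(0, "a") == 1
assert rw.receive(1, "b") == 2
assert rw.receive(2, "c") is None    # window full, no ACK: sender stops
```

When the application drains the buffer, the receiver can advertise a new window and the sender resumes, which is the cycle the bullets above describe.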


Saturday, July 13, 2013

Sliding Window Protocols? - Part 3

In this third part of the article we discuss the types of sliding window protocols and how these protocols can be extended.

1. Stop and Wait: 
- This is the simplest of the sliding window protocols. 
- Under this type, we have the stop-and-wait ARQ protocol as the simplest implementation.
- Both the transmit window and the receive window are 1 packet, and the number of sequence numbers required is 1 + 1 = 2. 
- The packets sent by the transmitter are marked alternately as odd and even. 
- The ACK packets therefore follow the series odd, even, odd, even and so on. 
- Now suppose the transmitter sends an odd packet and, without waiting for the odd ACK, immediately sends the next even packet. 
- In such a case, it might receive an ACK saying that an odd packet is expected. 
- This leaves the transmitter in a state of ambiguity: did the receiver get both of the packets or neither of them?

2. Go-Back-N ARQ: 
- This sliding window protocol has the receive window wr fixed at 1, while the transmit window wt is greater than one. 
- Here, the receiver will not accept any packet other than the next expected one in the sequence. 
- If a packet gets damaged or lost during transmission, the packets following the lost one will not be accepted by the receiver until it receives the lost one by re-transmission. 
- This means a minimum loss of 1 RTT (round trip time) per lost packet. 
- This is why the protocol is inefficient on links where packet loss is frequent. 
- Suppose a 3-bit sequence number is being used, as in typical HDLC. 
- This gives 8 sequence numbers, running from 0 to 7. 
- This also means we have 8 possibilities. 
- The transmitter requires enough ACK information to distinguish between those packets. 
- If 8 packets are sent back to back by the transmitter without stopping for an ACK, it finds itself in the same ambiguity as in the stop-and-wait case.
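The Go-Back-N receiver behaviour above (accept only the expected sequence number, discard everything after a gap) can be sketched in a few lines. This is an illustrative sketch; the function name is invented and ACK traffic is omitted.

```python
def go_back_n_receiver(packets, modulus=8):
    """Go-Back-N receiver sketch: accept only the expected sequence
    number; everything after a gap is discarded until re-transmission."""
    expected = 0
    accepted = []
    for seq, data in packets:
        if seq == expected % modulus:
            accepted.append(data)
            expected += 1
        # otherwise: silently discard; the sender goes back and re-sends
    return accepted

# Packet 2 is lost, so packet 3 is discarded until 2 and 3 are re-sent.
pkts = [(0, "a"), (1, "b"), (3, "d"), (2, "c"), (3, "d")]
assert go_back_n_receiver(pkts) == ["a", "b", "c", "d"]
```

The discarded frame 3 had actually arrived intact, which is exactly the waste that selective repeat (below) avoids.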

3. Selective repeat ARQ: 
- This is the most general case of the sliding window protocols. 
- It works with a receiver capable of accepting packets with sequence numbers greater than the current nr and storing them until the gap is filled. 
- The advantage is that correctly received data need not be discarded before re-transmission.

Ways to extend these protocols

  1. The above types of sliding window protocols do not talk about reordering packets after they have all been received, which would ensure that they do not appear in the wrong order. If the misordering distance can be bounded, the protocols can be extended to support this feature; the maximum misordering distance is used to expand the sequence number modulus N.
  2. It is also possible not to acknowledge every packet, sending an ACK only periodically. For example, in TCP every 2nd packet is typically acknowledged.
  3. Informing the transmitter immediately about a gap in the packet sequence is quite common; HDLC uses a special packet called the REJ packet for this purpose.
  4. During communication, the window sizes may change as long as their sum remains within the limit defined by N. Usually the transmit window size is reduced to slow down transmission, in order to keep pace with the speed of the links and prevent congestion or saturation. 


Sliding Window Protocols? - Part 2

As discussed in part 1 of this article, the sliding window protocol is a packet-based data transmission protocol. Sliding window protocols are used to regulate the reliability of data transmission. 
In this second part we discuss the motivation behind this protocol and how it actually operates. 
- A number of communication protocols based on automatic repeat request are used to regulate error control. 
- In such protocols, it is necessary for the receiver to acknowledge the packets it has received. 
- If the receiver does not send an ACK to the transmitter within a specified time period, the transmitter assumes that the packet has been lost and re-transmits it. 
- Obviously, a transmitter that does not receive an ACK for a packet it has sent cannot actually know whether the packet was delivered correctly. 
- If corruption is detected by the error detection process on the receiver's side, the receiver will simply ignore the packet and hence will not send any ACK to the transmitter. 
- In the same way, the receiver does not know whether the ACK it sent was received by the transmitter, or whether it got lost or damaged in transmission.
- In such a case, the re-transmission must be acknowledged by the receiver in order to prevent the transmitter from continuously re-sending the data. 
- Otherwise, the duplicate is simply ignored.

How does the protocol operate?
- The current sequence numbers, say nt and nr, are maintained by the transmitter and the receiver respectively. 
- Each of them also has a window size, say wt and wr respectively.
- In simple implementations of the protocol these sizes are fixed; they may vary in larger and more complex implementations. 
- For any progress to be made, the window sizes must be greater than zero. 
- In a typical implementation, nt denotes the next packet to be transmitted. 
- Similarly, nr denotes the first packet not yet received. 
- Both of these numbers increase monotonically with time. 
- The receiver also has to keep track of the highest sequence number received so far. 
- We have another variable, ns, which is one greater than the highest sequence number that has been received. 
- Simple receivers that accept packets only in order have wr = 1, in which case ns is the same as nr.
- But in some cases wr can exceed 1. 
Now we can say that:
1.     Below nr, every packet has been received.
2.     At ns and above, no packets have been received.
3.     It is only between nr and ns that some packets have been received.
- Whenever a packet is received, the receiver updates its variables appropriately and at the same time transmits an ACK with the updated value of nr.
- Similarly, the transmitter has a variable na for tracking the highest ACK it has received. 
- The transmitter knows that all packets below na have been received, but there is uncertainty about the packets between na and nt.
- There are certain rules that are always obeyed by the sequence numbers:
Ø  na ≤ nr: The highest ACK the transmitter has received cannot exceed the highest nr recorded by the receiver.
Ø  nr ≤ ns: The end of the fully received prefix cannot exceed the end of the partially received span.
Ø  ns ≤ nt: No packet can be received beyond the highest packet sent.
Ø  nt ≤ na + wt: The highest ACK received plus the window size limits the highest packet sent.
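The four rules above can be collapsed into a single chained comparison, which is a handy sanity check when implementing or debugging a sliding window protocol. A minimal sketch (the function name is invented for illustration):

```python
def window_invariants_hold(na, nr, ns, nt, wt):
    """Check the sequence-number invariants of a sliding window protocol:
    na <= nr <= ns <= nt <= na + wt."""
    return na <= nr <= ns <= nt <= na + wt

# A consistent snapshot of the protocol state:
assert window_invariants_hold(na=3, nr=4, ns=5, nt=6, wt=4)

# nt has run past the transmit window (6 > 3 + 2): invalid state.
assert not window_invariants_hold(na=3, nr=4, ns=5, nt=6, wt=2)
```

Asserting this invariant after every state update catches window-accounting bugs early.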



Sunday, July 7, 2013

Differentiate between persistent and non-persistent CSMA?

- CSMA, or Carrier Sense Multiple Access, makes use of the listen before talk (LBT) technique before making any transmission. 
- It senses the channel for its status; if the channel is found free or idle, the data frames are transmitted, otherwise the transmission is deferred until the channel becomes idle again. 
- In simple words, we can say that CSMA is an analogy to the human behaviour of not interrupting others who are busy. 
- There are a number of CSMA protocols, of which the persistent and the non-persistent ones are the major variants. 
- CSMA is based on the idea that if the state of the channel can be sensed prior to transmitting a packet, better throughput can be achieved.
- Also, using this method a number of collisions can be avoided. 
- However, the CSMA technique requires the following assumptions:
  1. The length of the packets is constant.
  2. Errors are caused only by collisions; there are no other errors.
  3. The capture effect is absent.
  4. Each host can sense the transmissions made by all the other hosts.
  5. The transmission time is always greater than the propagation delay.
About Persistent CSMA
- This protocol first senses the transmission channel and acts accordingly. 
- If the channel is found occupied by another transmission, it keeps listening, and as soon as the channel becomes free or idle it starts its own transmission. 
- On the other hand, if the channel is found idle, it does not wait and starts transmitting immediately. 
- Collisions are still possible. 
- If one occurs, the transmitter must wait for a random time duration and then start the transmission again. 
- One variant, the 1-persistent protocol, transmits with probability 1 whenever the channel is idle. 
- In persistent CSMA, collisions can occur even if the propagation delay is 0. 
- Collisions can only be avoided if the stations do not act so greedily. 
- We can say that this CSMA protocol is aggressive and selfish. 
- There is another variant of this protocol, called p-persistent CSMA. 
- This is the most optimal strategy. 
- Here the channel is assumed to be slotted, where one slot equals the contention period, i.e., 1 RTT of delay. 
- The protocol is named for the fact that it transmits a packet with probability p if the channel is idle; otherwise it waits for one slot and then tries again.

About Non–Persistent CSMA
- It is deferential and less aggressive compared to its persistent counterpart. 
- It senses the channel, and if the channel is busy it just waits and senses the channel again after some time, unlike persistent CSMA, which keeps sensing the channel continuously. 
- When the channel is found free, the data packet is transmitted immediately. 
- If a collision occurs, it waits and starts again.
- In this protocol, if two stations become greedy in the midst of some other station's transmission, they probably do not collide, whereas in persistent CSMA they do collide.
- Also, if only one station becomes greedy while another transmission is in progress, it has no choice but to wait. 
- In persistent CSMA, this greedy station takes over the channel upon completion of the current transmission.
- Using non-persistent CSMA reduces the number of collisions, whereas persistent CSMA increases the risk. 
- But non-persistent CSMA makes less efficient use of the channel than persistent CSMA.
- Efficiency lies in the protocols' ability to avoid collisions before starting transmission. 
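The per-slot decision of p-persistent CSMA described above can be sketched as a small function. This is an illustrative sketch only; the function name and the return labels are invented, and a real station would also handle collisions and backoff.

```python
import random

def p_persistent_attempt(channel_idle, p, rng):
    """One slot of p-persistent CSMA: defer while the channel is busy;
    when it is idle, transmit with probability p, else wait one slot."""
    if not channel_idle:
        return "defer"           # keep sensing (persistent behaviour)
    return "transmit" if rng.random() < p else "wait_one_slot"

rng = random.Random(42)
decisions = [p_persistent_attempt(True, 0.3, rng) for _ in range(1000)]
transmit_rate = decisions.count("transmit") / 1000
assert 0.2 < transmit_rate < 0.4          # roughly p, as expected
assert p_persistent_attempt(False, 0.3, rng) == "defer"
```

Lowering p makes stations less greedy, trading some idle slots for fewer collisions when many stations contend for the channel.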


Monday, July 1, 2013

What is the difference between TCP and UDP?

TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are two very important protocols. Both are transport protocols, and both are counted among the core protocols of the IP suite. 
They operate at the 4th layer, i.e., the transport layer of the TCP/IP model, but the two protocols are used very differently.

1. Reliability: 
- TCP is a connection-oriented protocol, whereas UDP is connection-less. 
- With TCP, a message sent will be delivered unless the connection fails completely. 
- If part of the message gets lost during delivery, the lost part is re-transmitted. 
- The message is not corrupted during transfer. 
- UDP is less reliable: if a message is sent, there is no guarantee that it will be delivered, and it may get lost on the way. 
- The message might also get corrupted during transfer.

2. Ordered: 
- If two messages are sent along the same TCP connection one after the other, the message sent first is certain to be delivered first. 
- Data is therefore always delivered in the order in which it was sent. 
- You do not have to worry about the order of the arriving data. 
- In the case of UDP, the order of arrival is not guaranteed. 
- The second message can arrive before the first one. 

3. Heavyweight: 
- TCP is heavyweight: when the low-level parts of the transmission stream arrive in the wrong order, re-transmission requests have to be sent, and all the parts of the message have to be put back together in the proper sequence. 
- So it takes some time to reassemble the parts. 
- UDP, on the other hand, is lightweight.
- After sending a message, the sender does not have to think about tracking connections, ordering of messages, etc. 
- This indeed makes it a lot quicker, and there is much less work for the network card and the OS in translating the data back from the packets.

4. Streaming: 
- In TCP, the data is read as a stream.
- Nothing distinguishes where one message ends and another begins. 
- A single read call may return data from several packets. 
- In UDP, each packet is sent individually, and if it arrives, it arrives whole. 
- Here, only one packet is returned per read call.

5. Some examples of TCP applications are FTP (File Transfer Protocol), the World Wide Web (e.g., Apache on TCP port 80), and secure shell (e.g., OpenSSH on TCP port 22). Examples of UDP applications are TFTP (Trivial File Transfer Protocol), VoIP (voice over IP), IPTV, online multiplayer games, the Domain Name System (DNS, UDP port 53), etc.

6. Error-checking: 
- The TCP protocol offers extensive error-checking mechanisms, with acknowledgement of data, flow control and so on. 


In TCP, a connection must be established in order to transfer data. UDP operates in datagram mode. You can choose between the two protocols depending upon your requirements: if guaranteed delivery of data is required, then the Transmission Control Protocol must be chosen. The User Datagram Protocol comes with only a basic error-checking mechanism, which checks the data by means of checksums. 
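The connection-oriented vs. connectionless distinction above is easy to see with the standard `socket` API. A minimal loopback sketch (assuming local sockets are permitted; port 0 lets the OS pick a free port):

```python
import socket

# UDP: connectionless datagrams; each send is one self-contained packet.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", 0))                      # port 0: OS picks a port
udp.sendto(b"hello", udp.getsockname())         # no connection needed
data, _ = udp.recvfrom(1024)
assert data == b"hello"                         # arrived whole, as one datagram
udp.close()

# TCP: a connection must be established before any data flows.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())            # three-way handshake here
conn, _ = server.accept()
client.sendall(b"hi")                           # delivered reliably, in order
assert conn.recv(1024) == b"hi"
for s in (client, conn, server):
    s.close()
```

Notice that UDP needed no `connect`/`accept` at all, while TCP could not send a single byte before the connection was established.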


Thursday, June 6, 2013

Explain the structure of the operating systems.

We are all addicted to using computers, but we rarely bother to find out what is actually inside them, i.e., what is operating the whole system. Then something inevitable occurs: your computer crashes and the machine is not able to boot. You call a software engineer, and he tells you that the operating system of the computer has to be reloaded. You are of course familiar with the term operating system, but do you know what it is exactly? 

About Operating System

- The operating system is the software that actually gives life to the machine. 
- Every computer system requires some basic intelligence to start with. 
- Unlike us humans, computers do not have any inborn intelligence.
- This basic intelligence is required because it is what the system uses to provide the essential services for running programs, such as providing access to the various peripherals, using the processor, allocating memory and so on. 
- The computer system also provides services for its users. 
- As a user, you may need to create, copy or delete files. 
- It is the operating system that manages the hardware of the computer system. 
- It also sets up a proper environment in which programs can be executed. 
- It is, in effect, an interface between the software and the hardware of the system.
- When the computer boots, the operating system is loaded into main memory. 
- The OS remains active as long as the system is running. 

Structure of Operating Systems

- The operating system is made up of several components, which we shall discuss in this article.
- These components make up the structure of the operating system.

1. Communications: 
- Processes may exchange information and data within the same computer or between different computers via a network. 
- This information may be exchanged via shared memory if the processes are on the same computer, or via message passing if they communicate over a network. 
- In message passing, the messages are moved by the operating system.

2. Error detection: 
- The operating system has to be alert to all the possible errors that might occur. 
- These errors may occur anywhere, from the CPU and memory hardware to the peripheral devices and user applications. 
- For each type of error, the operating system must take proper action to ensure correct and consistent computing. 
- Debugging facilities greatly enhance the abilities of users and programmers.

3. Resource allocation: 
- Resources have to be allocated to all of the running processes. 
- Some resources, such as main memory, file storage and CPU cycles, have special allocation code, while other resources, such as I/O devices, may have general request and release code.

4. Accounting: 
- This component is responsible for keeping track of which computer resources are being used and released.

5. Protection and Security: 
- The owners of data and information may want it protected and secured against theft and accidental modification.
- Above all, processes should not interfere with each other's working. 
- The protection aspect involves controlling access to all the resources of the system. 
- Security involves user authentication, in order to defend the system against invalid access attempts.

6. Command line interface or CLI: 
- This is the command interpreter that allows commands to be entered directly. 
- It is implemented either by a systems program or by the kernel.
- There are also a number of shells providing multiple implementations.

7. Graphical User Interface: 
This is the interface via which the user is able to interact with the system visually. 


Friday, April 19, 2013

What is Paging? Why is it used?


- Paging is a very important memory management concept in computer operating systems. 
- It is essentially a memory management scheme used for storing data on, as well as retrieving it from, secondary storage devices.
- Under this scheme, data is retrieved from the secondary storage devices and handed over to the operating system. 
- The data comes in blocks that all have the same size. 
- These data blocks are called pages. 
- With paging, the physical address space of a process can be non-contiguous. 
- Paging is a key concept in implementing virtual memory in operating systems designed for contemporary, general use. 
- It allows disk storage to be used for data that does not fit into RAM. 
- The main functions of the paging technique are carried out when a program attempts to access a page that has no mapping to physical RAM. 
- This situation is commonly known as a page fault. 
- In this situation, the OS takes control and handles the fault. 
- This is done in a way that is invisible to the application. 

The operating system carries out the following tasks in paging:
Ø  Locates the address of the data in auxiliary storage.
Ø  Obtains a vacant page frame in physical memory to be used for storing the data.
Ø  Loads the data requested by the application into the page frame obtained in the previous step.
Ø  Updates the page table to show the new mapping.
Ø  Gives execution control back to the program. This maintains transparency: the program simply retries the instruction that caused the fault.

- If there is no space available in RAM to store all the requested data, another page must be removed from RAM. 
- If all the page frames are full, a frame is chosen whose data is not expected to be needed soon, and it is emptied. 
- A page frame is said to be dirty if it has been modified since it was last read into RAM. 
- In such a case it has to be written back to its original location on the drive before it can be freed. 
- If the evicted page is referenced again later, a new fault occurs, which requires obtaining an empty frame and reading the contents back from the drive into it. 
- Paging systems must be efficient at determining which frames are to be emptied. 
- Many page replacement algorithms have been designed to accomplish this task. 
- Some of the most commonly used replacement algorithms are:
Ø  LRU or least recently used
Ø  FIFO or first in first out
Ø  LFU or least frequently used.
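One of the listed policies, LRU, can be sketched as a small fault counter. This is an illustrative sketch (the function name is invented); real kernels approximate LRU with cheaper hardware-assisted schemes.

```python
from collections import OrderedDict

def count_faults_lru(references, frames):
    """Count page faults for an LRU replacement policy with a fixed
    number of physical frames."""
    memory = OrderedDict()                  # page -> None, ordered by recency
    faults = 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1                     # page fault: load the page
            if len(memory) >= frames:
                memory.popitem(last=False)  # evict the least recently used
            memory[page] = None
    return faults

# 1, 2, 3 fault on load; 1 hits; 4 evicts 2; 2 evicts 3: five faults total.
assert count_faults_lru([1, 2, 3, 1, 4, 2], frames=3) == 5
```

Running the same reference string under FIFO or LFU generally gives a different fault count, which is why the choice of policy matters.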

- To further increase responsiveness, paging systems may employ various strategies to predict which pages will be needed soon. 
- Such systems will attempt to load pages into main memory preemptively, before a program references them. 
- When demand paging is used, by contrast, a page is loaded only when it is actually requested, not in advance. 
- With a demand pager, execution of a program begins with none of its pages loaded into RAM. 

