Showing posts with label Transfer. Show all posts

Monday, August 12, 2013

What are different methods of broadcasting a packet?

Without broadcasting, much of information theory and telecommunications would not be possible. 
- It is broadcasting that makes the transfer of data from one point to many others possible. 
- Broadcasting can be defined as the transfer of a message to a number of recipients at the same time. 
- Broadcasting is considered a high-level operation in some contexts and a low-level operation in others. 
- For example, in the Message Passing Interface (MPI), broadcast is a high-level operation, whereas broadcasting on Ethernet is a low-level networking operation. 

We have many kinds of routing schemes suited to various broadcasting requirements:
  1. Anycast
  2. Broadcast
  3. Multicast
  4. Unicast
  5. Geocast
- Broadcasting is the transmission of a packet to every device attached to the network. 
- In practice, however, broadcasting is limited to transmission within the broadcast domain. 
- Broadcasting can be contrasted with the unicast routing scheme, in which a datagram is transmitted by one host and received by a single other host. 
- This receiving host is identified by an IP address that is unique to it on the network. 
- Not all networking technologies are capable of supporting broadcasting. 

For example, the following do not have this capability:
Ø  X.25
Ø  Frame relay

- The broadcast method is not implemented in IPv6, i.e., the successor of IPv4 (Internet Protocol version 4).
- This is to avoid disturbing every node on the network. 
- Also, there is no such thing as an internet-wide broadcast. 
Therefore, the scope of broadcasting is limited to LAN technologies such as Token Ring and Ethernet, where its impact on performance is small.

Categories of Broadcasting Methods

The broadcasting methods, as studied for IEEE 802.11-based mobile ad hoc networks (MANETs), can be classified into 4 major categories:

1. Simple flooding method: 
- In this method the packet is rebroadcast by each of the nodes.
- A source node in the MANET disseminates a message to all of its neighboring nodes. 
- If a neighboring node has already received the message, it drops it this time.
- If not, it re-disseminates the message to its own neighbors. 
- This process continues until all the nodes have received the message. 
- This method proves reliable only in MANETs, and only when the nodes have low density and high mobility. 
- The method also has the potential to harm the network and make it unproductive. 
- This is because it causes congestion in the network and exhausts the nodes' battery power.
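The rebroadcast-and-drop behaviour of simple flooding can be sketched as a small simulation (a minimal sketch; the topology and function name here are made up for illustration):

```python
from collections import deque

def simple_flood(adjacency, source):
    """Simulate simple flooding: every node rebroadcasts a message
    the first time it receives it, and drops duplicates."""
    received = {source}      # nodes that already have the message
    transmissions = 0        # total broadcasts on the air
    queue = deque([source])  # nodes that still have to rebroadcast
    while queue:
        node = queue.popleft()
        transmissions += 1   # this node broadcasts to all its neighbours
        for neighbour in adjacency[node]:
            if neighbour not in received:  # duplicates are dropped
                received.add(neighbour)
                queue.append(neighbour)
    return received, transmissions

# A small 5-node MANET snapshot (hypothetical topology):
net = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
nodes, sent = simple_flood(net, 0)
```

On this topology every node ends up rebroadcasting exactly once, which is precisely why flooding congests dense networks: the number of transmissions grows with the number of nodes, not with the number of distinct recipients.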

2. Area based broadcasting method: 
- Here, we assume a transmission distance.
- A node rebroadcasts only if doing so would cover sufficient additional area; otherwise it stays silent. 
- This method is of two types, namely the location-based scheme and the distance-based scheme.

3. Probability-based method: 
- Rebroadcasting is done by the nodes depending on the network's topology and the probabilities assigned to them. 
- This resembles the flooding algorithm, with the exception that each node rebroadcasts with a predetermined probability.
- Where the network is dense, multiple nodes share similar transmission coverage, so not every node needs to rebroadcast.

4. Neighborhood based broadcasting method: 
- This method maintains state about the neighborhood, and rebroadcasting decisions are made using information obtained from the nodes in this neighboring area. 
- There are two types of this method, namely the self-pruning approach and the ad hoc broadcasting approach.


Friday, July 19, 2013

What are the goals and properties of a routing algorithm?

Routing requires the use of routing algorithms for the construction of routing tables.
A number of routing algorithms are available today, such as:
1.   Distance vector algorithm (Bellman-Ford algorithm)
2.   Link state algorithm
3.   Optimized link state routing algorithm (OLSR)
- In a number of networked applications, many nodes need to communicate with each other via communication channels. 
- A few examples of such applications are telecommunication networks (such as POTS/PSTN, the internet, mobile phone networks, and local area networks), distributed applications, and multiprocessor computers. 
- Not every node can be connected to every other node, since doing so would require many high-powered transceivers, wires, and cables. 
- Therefore, the implementation is such that a node's transmissions are forwarded by the other nodes until the data reaches its correct destination. 
- Thus, routing is the process of determining where packets have to be forwarded, and forwarding them accordingly.

Properties of Routing Algorithm
- The packets must reach their destination if there are no factors preventing this such as congestion.
- The transmission of data should be quick.
- There should be high efficiency in the data transfer.
- The computations involved must not be lengthy; they should be as simple and quick as possible.
- The routing algorithm must be capable of adapting to two factors, i.e., changing load and changes in topology (this includes channels that are newly added and ones that have been deleted).
- All the different users must be treated fairly by the routing algorithm.
The second and third properties can be achieved using fastest-route or shortest-route algorithms. 
- Graphical representation of the network is a crucial part of the routing process.
- Each network node is represented by a vertex in the graph whereas an edge represents a connection or a link between the two nodes. 
- The cost of each link is represented as the weight of the edge in the graph. 
- There are 3 typical weight functions as mentioned below:
1.   Minimum hops: The weight of all the edges in the graph is the same.
2.   Shortest path: The weight of each edge is a fixed non-negative value (the values may differ from edge to edge).
3.   Minimum delay: The weight of each edge depends on the traffic on its link and is a non-negative value.
However, in real networks the weights are always positive.
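Given such a weighted-graph representation, a shortest-route algorithm like Dijkstra's can compute the entries of a routing table. A minimal sketch (the graph and names are illustrative, not from the original post):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` over non-negative edge weights.
    graph: {node: [(neighbour, weight), ...]}"""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for neighbour, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Setting every weight to 1 turns this into the "minimum hops" metric;
# traffic-dependent weights would give "minimum delay".
g = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)],
     "C": [("D", 1)], "D": []}
dist = dijkstra(g, "A")
```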

Goals of Routing Algorithms
- The goal of these routing algorithms is to find the shortest path, based on some specified relationship, that results in the maximum routing efficiency. 
- Another goal is to use as little information as possible.
- A further goal is to keep the routing tables updated with alternative paths, so that if one fails, another can be used.
- The channel or path that fails is removed from the table. 
- Routing algorithms need to be stable in order to provide meaningful results, but at the same time it is quite difficult to detect the stable state of an algorithm. 
- Choosing a routing algorithm is like choosing different horses for different courses. 
- The frequency of changes in the network is one thing to be considered. 
Other things to be considered include the cost function that needs to be minimized and whether the routing tables are calculated in a centralized fashion.
- For static networks, the routing tables are fixed and therefore require only simple routing algorithms. 
- On the other hand, networks that are dynamic in nature require distributed routing algorithms, which are of course more complex.



Wednesday, July 10, 2013

Explain the concept of piggybacking?

- Piggybacking is a well known technique used in the transmission of data in the second layer of the OSI model, i.e., the data link layer. 
- It is employed so that a majority of the frames transmitted from the receiver back to the emitter also carry useful data. 
- It adds to the outgoing data frame the confirmation for a data frame that was successfully received. 
- This confirmation is called the ACK or acknowledgement signal. 
- In practice, this ACK signal is piggybacked on a data frame rather than being sent individually by some other means. 

Principle behind Piggybacking
- The piggybacking technique should not be confused with the sliding window protocols that are also employed in the OSI model; rather, it is used together with them. 
- In piggybacking, an additional field for the ACK or acknowledgement signal is incorporated into the data frame itself. 
- The only difference from a plain sliding window protocol is this embedded acknowledgement field.
- Whenever some data has to be sent from one party to another, it is sent along with the field for the ACK. 

The piggybacking data transfer is governed by the following three rules:
Ø  If both data and an acknowledgement have to be sent by party A, it includes both fields in the same frame.
Ø  If only an acknowledgement has to be sent by party A, it will have to use a separate frame, i.e., an ACK frame, for that.
Ø  If only data has to be sent by party A, then the ACK field is still included within the data frame, carrying the last acknowledgement again. This duplicate ACK is simply ignored by the receiving party B.
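The three rules can be sketched as a frame-construction helper (a hypothetical sketch; the Frame fields and function name are made up for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    data: Optional[str]  # payload, or None for an ACK-only frame
    ack: Optional[int]   # sequence number being acknowledged

def build_frame(data, pending_ack, last_ack):
    """Decide what party A transmits next, per the three rules above."""
    if data is not None and pending_ack is not None:
        return Frame(data, pending_ack)  # rule 1: piggyback ACK on data
    if data is None and pending_ack is not None:
        return Frame(None, pending_ack)  # rule 2: separate ACK-only frame
    return Frame(data, last_ack)         # rule 3: data with duplicate ACK,
                                         # which the receiver simply ignores
```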

- The main advantage of using this technique is that it improves efficiency by saving separate ACK frames. 
- The disadvantage is that the acknowledgement can be blocked or delayed if the receiving party has no data to transmit. 
- Enabling a receiver timeout, by starting a counter the moment the party receives a data frame, solves this problem to a great extent. 
- An ACK control frame is sent by the receiver if the timeout occurs and there is still no data to transfer. 
- A counter called the emitter timeout is also set up by the sender; if it expires without any confirmation from the receiver, the sender assumes that the data packet got lost on the way and retransmits it.

- Piggybacking is also used in accessing the internet.
- Here it refers to establishing a wireless internet connection by using a subscriber's wireless internet access service without the subscriber's explicit permission. 
- However, according to various jurisdictions' laws around the world, this practice is the subject of ethical and legal controversy. 
- In some places it is completely regulated or outlawed, while in other places it is allowed.  
- When a business such as a cafe or hotel provides hotspot services to its customers, use by non-customers can be regarded as piggybacking. 
- A number of such locations provide access for a fee. 


Wednesday, May 29, 2013

Explain the various File Access methods?

One of the most important functions of a mainframe operating system is its access methods, which make it possible to access data on external devices such as tape or disk. 

What are access methods?

- Access methods are very useful in providing an API for transferring data from one device to another.
- Another good thing about this API is that it plays the role that device drivers play in operating systems on non-mainframe computers. 
- There were several reasons behind the introduction of access methods. 
- A special program had to be written for the I/O channel, a processor entirely dedicated to controlling access to the peripheral storage device and the data transfer between it and physical memory. 
- These channel programs are made up of special instructions known as CCWs, or channel command words.
- Writing such programs requires very detailed knowledge of the characteristics of the hardware. 

Benefits of File Access Methods

There are 3 major benefits of the file access methods:
Ø  Ease of programming: The programmer does not have to deal with device-specific procedures, recovery tactics, and error detection. A program designed to process data will do so no matter where the data is stored.
Ø  Ease of hardware replacement: A program need not be altered by the programmer during the migration of data from an older to a newer model of storage device, provided the same access methods are supported by the new model.
Ø  Ease in sharing data set access: The access methods can be trusted to manage multiple accesses to the same file. At the same time, they ensure the security of the system and data integrity.

Some File/Storage Access Methods

Ø  Basic direct access method (BDAM)
Ø  Basic sequential access method (BSAM)
Ø  Queued sequential access method (QSAM)
Ø  Basic partitioned access method (BPAM)
Ø  Indexed sequential access method (ISAM)
Ø  Virtual storage access method (VSAM)
Ø  OAM (object access method)

- Both types of access, i.e., queued and basic, are suitable for dealing with the records of a data set. 
- The queued access methods are an improvement over the basic file access methods. 
- A read-ahead scheme and internal blocking of data are well supported by these methods. 
- This allows combining multiple records into one unit, thus increasing performance. 
- The sequential methods assume that records can only be processed sequentially, which is just the opposite of the direct access methods. 
- There are devices, like magnetic tape, that enforce sequential access only. 
- Sequential access can be used for writing a data set, which can later be processed in a direct manner.
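The contrast between sequential and direct processing of the same data set can be sketched with fixed-length records (an illustrative sketch; the record length and names are invented):

```python
import io

RECLEN = 16  # fixed record length in bytes (illustrative choice)

def read_sequential(f):
    """Yield records one after another, as a sequential access method would."""
    while True:
        rec = f.read(RECLEN)
        if len(rec) < RECLEN:
            return
        yield rec

def read_direct(f, n):
    """Jump straight to record n, as a direct access method would."""
    f.seek(n * RECLEN)
    return f.read(RECLEN)

# Write a data set sequentially, then process one record directly.
buf = io.BytesIO()
for i in range(4):
    buf.write(f"record-{i}".encode().ljust(RECLEN, b" "))
buf.seek(0)
```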

Today we have access methods that are network-oriented such as the following:
Ø  Basic telecommunications access method or BTAM
Ø  Queued teleprocessing access method or QTAM
Ø  Telecommunications access method or TCAM
Ø  Virtual telecommunications access method or VTAM

The term access method was also used by IMS, the IBM Information Management System, to refer to its methods for manipulating database records. 
- The access methods used by it are:
Ø  GSAM or generalized sequential access method
Ø  HDAM or hierarchical direct access method
Ø  HIDAM or hierarchical indexed direct access method
Ø  HISAM or hierarchical indexed sequential access method
Ø  HSAM or hierarchical sequential access method
Ø  PHDAM or partitioned hierarchical direct access method
Ø  PHIDAM or partitioned hierarchical indexed direct access method



Thursday, March 28, 2013

What is the basic principle behind Dynamic synchronous transfer mode (DTM)?


- Dynamic synchronous transfer mode, or DTM, is one of the most interesting of all networking technologies. 
- The basic objective behind implementing this technology is to achieve high-speed networking along with top-quality transmission.
- It also possesses the ability to adapt its bandwidth quickly under varying traffic conditions. 
- DTM was designed to be used in integrated service networks, covering both one-to-one communication and distribution.
- Furthermore, it can be used in application-to-application communication. 
- Nowadays, it has also found use as a carrier for IP and other higher-layer protocols. 
- DTM is a combination of 2 basic technologies, namely packet switching and circuit switching. 
- It is because of this that DTM has many advantages to offer. 
- It also comes with a number of service access solutions for the following fields:
Ø  City networks
Ø  Enterprises
Ø  Residential as well as other small offices
Ø  Content providers
Ø  Video production networks
Ø  Mobile network operators

Principles of Dynamic synchronous transfer mode (DTM)

 
- This mode has been designed to work on a unidirectional medium. 
- This medium also supports multiple access, i.e., all the connected nodes can share it. 
- It can be built on various topologies such as:
  1. Ring
  2. Double ring
  3. Point-to-point
  4. Dual bus and so on.
- DTM is based on TDM, or time division multiplexing. 
- Here, a fiber link's transmission capacity is broken down into smaller units of time. 
- The total link capacity is divided into frames of a fixed duration of 125 microseconds. 
- The frames are then further divided into time slots of 64 bits. 
- How many time slots fit in one frame is determined by the link's bit rate. 
- These time slots comprise separate control slots and data slots. 
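The slot arithmetic can be checked with a small calculation (a sketch; the 1 Gbit/s link rate is a hypothetical example, while the 125-microsecond frame and 64-bit slot come from the description above):

```python
FRAME_SECONDS = 125e-6   # fixed DTM frame duration
SLOT_BITS = 64           # bits per time slot

def slots_per_frame(link_bit_rate):
    """How many full 64-bit slots fit into one 125-microsecond frame."""
    bits_per_frame = link_bit_rate * FRAME_SECONDS
    return int(bits_per_frame // SLOT_BITS)

# At a hypothetical 1 Gbit/s link, one frame carries 125,000 bits,
# i.e. 1953 full 64-bit slots (control slots plus data slots).
slots = slots_per_frame(1e9)
```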
- If in some cases more control slots are required, data slots can be converted into control slots, and vice versa.
- The nodes attached to the link have the right to write into both kinds of slots. 
- A slot occupies the same time slot position within each successive frame. 
- Each node has the right to at least one control slot, which it can use for transmitting control messages to the other nodes. 
- These messages can be sent when requested by the user, as a response to messages sent by other nodes, or for network management purposes.
- The control slots constitute a small fraction of the whole capacity, while the major part is taken by the data slots that carry payload. 
- The signaling overhead in DTM varies with the number of control slots, though it is usually very low.
- Whenever a communication channel is established, a portion of the available data slots is allocated to the channel by the node. 
- There has been an increasing demand for network transfer capacity because of the globalization of network traffic and integrated audio, video, and data transmission. 
- Optical fibers' transmission capacity is increasing by great margins compared to processing power. 
- DTM holds the promise of providing full control over network resources.


Sunday, March 10, 2013

What is meant by Quantum Cryptography?


The use of quantum mechanical effects for carrying out cryptographic tasks is called quantum cryptography. The same effects can also be used for breaking cryptographic systems. The quantum mechanical effects used include:
1. Quantum computation and
2. Quantum communication

Some very popular examples of uses of quantum cryptography are as follows:
1. The secure exchange of a key, i.e., quantum key distribution
2. The use of quantum computers for breaking into systems that use public-key encryption and signature schemes such as ElGamal and RSA

The major advantage of quantum cryptography is that it completes a number of cryptographic tasks that are almost impossible to complete through classical communication, i.e., without quantum effects.

Applications of Quantum Cryptography


1. Quantum key distribution:
- This is the most widely used application.
- It can be described as the use of quantum communication to establish a key shared by two parties (usually referred to as 'Alice' and 'Bob') without any third party (called 'Eve') learning anything about the key, even if Eve eavesdrops on the communication between the two.
This happens as follows:
- The bits of the key are encoded by Alice as quantum data and are sent to Bob.
- Now if Eve eavesdrops, the message will be disturbed, letting Alice and Bob know about it.
- Thus, this provides a secure way of distributing a key for encrypted communication.
- Further, QKD's security can be proven mathematically without restricting the eavesdropper's abilities.
- In classical key distribution this is not possible.
- This is commonly known as 'unconditional security'.
- However, the laws of quantum mechanics still apply, and there is a need for Alice and Bob to authenticate each other. 
- It should not be possible for Eve to impersonate Alice or Bob.
- Otherwise, the scheme is open to a man-in-the-middle attack.
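The sifting step of a QKD protocol such as BB84 can be sketched classically (a toy simulation, not real quantum communication; with no eavesdropper, the positions where Alice's and Bob's bases match yield identical sifted keys):

```python
import random

def bb84_sift(n, rng):
    """Toy BB84 sifting: keep only positions where the bases match."""
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0 = +, 1 = x
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]
    # With no eavesdropper, Bob reads Alice's bit correctly whenever
    # his measurement basis matches hers; otherwise his result is random.
    bob_bits = [alice_bits[i] if alice_bases[i] == bob_bases[i]
                else rng.randint(0, 1) for i in range(n)]
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

alice_key, bob_key = bb84_sift(32, random.Random(2013))
```

An eavesdropper measuring in randomly chosen bases would disturb a fraction of the sifted bits, which Alice and Bob can detect by comparing a sample of their keys over a classical channel.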

2. Quantum commitment:
- This was another task that researchers tried to achieve with the unconditional security offered by QKD.
- Quantum commitment is a scheme in which a party, Alice, can fix (commit to) a certain value that she cannot change anymore, while it is ensured that Bob learns nothing about it unless and until Alice decides to reveal it to him.
- The most common use of these schemes is in cryptographic protocols.
- Oblivious transfers can be performed by constructing an unconditionally secure protocol from a quantum channel and a commitment scheme.
- With such transfers, any distributed computation can be implemented securely.

3. Bounded – and noisy – quantum – storage model (BQSM):
- This model provides a possible way of constructing quantum commitments and oblivious transfers (OTs) that are unconditionally secure.
- The model assumes that a known constant Q limits the amount of quantum data an adversary can store.
- However, no limit is imposed on the classical data.
The idea behind this model is:
- The number of quantum bits exchanged by the participating parties is more than Q.
- This amount of information cannot be stored even by a dishonest party, since the adversary's memory limit is Q qubits.
- This leaves it 2 options: either discard part of the data or measure it.
- So now the OTs can be implemented.


Thursday, March 7, 2013

What is meant by Holographic Data Storage?


Currently, conventional magnetic and optical data storage dominates the field of high-capacity data storage. But another technology, called holographic data storage, has the potential to lead this area. Now what is this technology? We shall discuss it in this article. 

Difference between Conventional storage methods and Holographic Data Storage

- The conventional optical and magnetic storage technologies depend on recording individual data bits at distinct optical or magnetic locations on the medium's surface. 
- In the holographic technology, the information is recorded throughout the medium's volume.
- Multiple images can be recorded in the same area of the medium by utilizing light at varying angles. 
- Further, in the conventional storage methods the recording takes place in a linear fashion, whereas 
- in holographic storage, millions of bits can be recorded and read in parallel, thus increasing data transfer rates beyond what the conventional methods offer.

Features of Holographic Data Storage

Data Recording: 
- The information is stored in a thick photosensitive optical material with the help of an optical interference pattern. 
- A laser beam is split into two beams; one of them carries an optical pattern formed of light and dark pixels, and both are projected towards the medium, where their interference is recorded. 
- A multitude of holograms can be recorded in one volume by making adjustments in the wavelength, the reference beam angle, the media position, etc.

Data Reading: 
- For reading the stored data, the same reference beam that created the hologram is reproduced. 
- This light beam is focused on the photosensitive material and illuminates the interference pattern, which leads to the diffraction of the light. 
- The reconstructed pattern is then projected onto a detector. 
- Millions of bits are then read in parallel. 
- This means a high data transfer rate. 
- It takes less than 0.2 seconds to access information from a holographic drive. 
- Holographic data storage has offered many companies a solution for preserving and archiving information. 
- The WORM (write once, read many) approach provides assurance about content security, guarding against modification and overwriting of the information. 
- This technology offers hope for storing data without degradation for 50-plus years, which is considerably more than the current options.
- However, even if it becomes possible to store data for 50 to 100 years in the same format, that longevity may be of limited use. 
- This is because the format itself will likely change in less than 10 years.  

Types of Holographic Media

- Holographic media is of two types, namely:
  1.  The re-writable media and
  2. The write-once media.
- In the former type, the changes can be reversed, but in the latter the changes are irreversible.

There is little point in competition between holographic storage and hard drives, since the former can find a market based on virtues like access speed. 
- The holographic data storage technology does seem to have a future in the video game market. 
In 2009, GE Global Research came up with its own demonstration of a holographic storage medium: discs whose read mechanism is somewhat similar to that of Blu-ray disc players. 


Tuesday, July 17, 2012

What is the difference between HTTP and HTTPS?


HTTP is quite a common term for us and stands for hypertext transfer protocol. It is an application protocol that has been developed for distributed, collaborative, hypermedia information systems. 

It is the foundation of data communication on the World Wide Web (www).  
Now what is HTTPS? HTTPS is nothing but HTTP Secure! It is much more secure than the usual hypertext transfer protocol. And like HTTP, it is also quite a popular communication protocol, used for secure communication over a computer network. It is quite popular on the internet. 
Technically, it is actually not a protocol in itself but rather HTTP layered on top of the SSL/TLS protocol. This allows the security capabilities of SSL/TLS to be added to HTTP. In this article we discuss the differences between the two, i.e., HTTP and HTTPS.

Difference #1:
- The transmission and receiving of information across a computer network or the internet is the basic responsibility of HTTP.   
- HTTPS holds the responsibility of exchanging confidential information with servers, where access to that information has to be secured against any unauthorized access.

Difference #2:
- The transmission of HTTP takes place over the wire via port 80 (TCP), but it is not at all secure! Someone can easily interfere in the communication between your system and the server. 
- HTTPS is a creation of Netscape, and it came built into the Netscape browser, which used it for the encryption and decryption of the user's requests.
- HTTPS is actually HTTP working over Netscape's secure sockets layer (SSL). 
- Unlike regular HTTP, HTTPS transmission takes place via port 443 for carrying out interactions with the lower-layer TCP/IP. 
- Early SSL implementations used a 40-bit key for the RC4 stream cipher. 
- This was considered an adequate degree of encryption for commercial exchange at the time.

Difference #3:
- HTTPS, being so secure, finds its use on shopping/commercial sites and login pages. 
- HTTPS, though a standard secure protocol, transmits data over the World Wide Web just like HTTP, with the only difference being the form in which the data is transmitted, i.e., encrypted form. 
- When you put https:// instead of http://, you are asking the server to establish a secure connection path. 
- The server makes it a point that secure and non-secure connections are kept separate.
- When the address in the address bar of the web browser you are using starts with http://, it simply means that your requests are being communicated over the regular, unsecure "HTTP" protocol.
- It is basically the letter 'S' that makes all the difference between HTTP and HTTPS. 

Difference #4:
- Most client requests are processed via HTTP. The client in turn gets a response from the server, in the form of a web page, on the completion of a request. 
- In HTTPS the information is encrypted, which means that no one can have a clue about what you are looking for. 
This type of secure communication is commonly prevalent in areas where security is mandatory, like the following:
  1. E-mails
  2. Banking web sites
  3. Payment gateways and so on.
To serve an HTTPS connection, the server requires a trusted, signed public-key certificate.
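The scheme-to-port mapping and the certificate-verification default can be sketched in a few lines (an illustrative sketch using Python's standard library; no network connection is made):

```python
import ssl
from urllib.parse import urlsplit

def default_port(url):
    """TCP port implied by the URL: the explicit port if one is given,
    else the scheme default (HTTP -> 80, HTTPS -> 443)."""
    parts = urlsplit(url)
    return parts.port or {"http": 80, "https": 443}[parts.scheme]

# For HTTPS the client additionally wraps the TCP connection in TLS;
# Python's default client context requires a trusted, signed certificate
# and checks that it matches the server's hostname.
context = ssl.create_default_context()
```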


