

Showing posts with label Signals. Show all posts

Monday, October 14, 2013

What are secret-key and public-key signatures?

- Asymmetric cryptography is often referred to as public-key cryptography. 
- It is a class of cryptographic algorithms that makes use of two separate keys, namely the secret (private) key and the public key. 
- The secret key is kept private and the public key is openly published. 
- Even though these two keys are different, there is a mathematical link between them. 
- The public key is used for the encryption of plain text and for the verification of digital signatures. 
- The private key is used for the decryption of cipher text into plain text and for the creation of digital signatures. 
- The two keys thus play complementary roles, unlike in symmetric cryptography where the same key serves both purposes. 
- Public keys are created based upon mathematical problems for which there is presently no efficient solution, such as the following:
Ø  Elliptic curve relationships
Ø  Discrete logarithms
Ø  Integer factorization
- Generating a public and private key pair is computationally easy for the users. 
- The strength of public-key cryptography lies in the fact that determining the private key from its public key is computationally infeasible. 
- Thus, the public key can be published without fear of compromising security, whereas the private key is kept hidden from anyone who is not authorized to create digital signatures or read the messages. 
- Unlike the symmetric key algorithms, the public key algorithms do not require a secure initial exchange of secret keys. 
- In the process of message authentication, a private key is used to process a message and produce a digital signature. 
- After that, anyone can verify the signature by processing its value with the signer's corresponding public key. 
- The result is then compared with the message. 
- A successful comparison confirms that the message has not been modified. 
- It is also presumed that the signer's private key has been kept hidden from others. 
- However, in practical applications, it is the message's digest or hash that is signed rather than the whole message. 
- Public key algorithms are fundamental security components of cryptosystems, protocols and applications.
These systems underpin the following internet standards:
Ø  PGP
Ø  GPG
Ø  TLS or transport layer security


- Some public key algorithms, such as the Diffie-Hellman key exchange, provide secrecy as well as key distribution; some, like the Digital Signature Algorithm, provide digital signatures; and some others offer both.
- An example of the latter is RSA. 
- All these algorithms have been widely accepted. 
- Each user is provided with a pair of cryptographic keys: a public key for encryption and a private key for decryption. 
- Similarly, for digital signatures the pair of keys consists of a private key for signing and a public key for verification. 
- The concept of the private key was introduced so as to ensure confidentiality. 
- Digital signatures can be verified by anyone possessing the corresponding public key. 
- Such a verification confirms that the sender possessed the private key. 
- It is also a way to confirm that no tampering has been done to the message. 
- If the message has been tampered with, the changes will show up in the encoded message digest. 
- A mail box with a mail slot and a personal wax seal can be taken as analogies to public-key encryption and digital signatures respectively. 
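The sign-with-the-private-key, verify-with-the-public-key flow described above can be sketched with a toy RSA example. The primes here are tiny and chosen for illustration only; real systems use keys of 2048 bits or more together with standardized padding of the hash.

```python
import hashlib

# Toy RSA key pair (illustrative only; real keys use ~2048-bit primes).
p, q = 61, 53
n = p * q                      # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

def sign(message: bytes) -> int:
    # Hash the message, then transform the digest with the private key.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recover the digest with the public key and compare.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"hello")
print(verify(b"hello", sig))    # True: message unmodified
print(verify(b"hell0", sig))    # tampering changes the digest, so this fails
```

Note that only the digest is signed, as described above, and that anyone holding `(n, e)` can run `verify` while only the holder of `d` can produce a valid signature.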


Wednesday, September 25, 2013

What is meant by multiplexing?

- Multiplexing or muxing is a very important process in computer networks and telecommunications. 
- Using this process, a number of digital data streams or analog message signals are combined into one signal and then transported over a common medium. 
- Multiplexing is used wherever an expensive resource has to be shared. 
- The most common example of multiplexing is using one wire for several telephone calls. 
- The origin of multiplexing dates back to the 1870s, when it was first used in telegraphy.
- Now it is used to a great extent in the field of communications. 
- In telephony, telephone carrier multiplexing was developed by George Owen Squier. 
- The communication channel over which the multiplexed signal is transmitted may be a physical transmission medium. 
- The multiplexing process divides the capacity of a high-level communication channel into a number of low-level logical channels, one channel for each message or data stream. 
- Demultiplexing is the reverse process of multiplexing. 
- It is used for the extraction of the original signals on the receiving side.
- A multiplexer or MUX is the device that carries out the multiplexing process, and the demultiplexer or DEMUX is the device that performs demultiplexing. 
- IMUX or inverse multiplexing is another process whose aim is just the opposite of multiplexing.
- It breaks down a single data stream into several streams and transfers them at the same time over several communication channels.
- Later, the original stream is recreated at the receiving end.

Types of Multiplexing

Many different types of multiplexing technologies are available today. Each has its own significance:

Ø SDM or space-division multiplexing: 
This technique uses a separate point-to-point wire for each communication channel. Examples include an analogue stereo audio cable, a multi-pair telephone cable, a switched star network and a mesh network. However, wired SDM is not usually considered multiplexing. In wireless SDM, multiple antennas form a phased array; examples include MIMO (multiple-input and multiple-output), SIMO (single-input and multiple-output) and MISO (multiple-input and single-output).

Ø  FDM or frequency-division multiplexing: 
This is an analog process in which the signals are sent in different frequency ranges over a shared medium. For example, TV and radio broadcasting from terrestrial or satellite stations through the earth's atmosphere, or cable television: one cable serves each house, but over this cable many signals can be sent to the other subscribers as well. To access the desired signal, a user tunes to that particular frequency. WDM or wavelength-division multiplexing is a variant of FDM.

Ø TDM or time-division multiplexing: 
Unlike FDM, TDM is a digital technology, though rarely it may also be used as an analog technology. The process involves putting bytes from each input stream into a sequence, one stream at a time. This sequencing is done in such a way that the receiver can appropriately separate them. If it is done quickly enough, the receiver will not detect that another logical communication path was served in the same circuit time.
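The byte-interleaving just described can be sketched as a round-robin time-division multiplexer. This is a simplified model; real TDM hardware also inserts framing bits so the receiver can find the slot boundaries.

```python
def tdm_mux(streams):
    """Interleave one byte from each input stream per frame (round robin)."""
    frames = []
    for chunk in zip(*streams):          # one time slot per stream, per frame
        frames.extend(chunk)
    return bytes(frames)

def tdm_demux(signal, n_streams):
    """Reverse the interleaving: slot i of every frame belongs to stream i."""
    return [signal[i::n_streams] for i in range(n_streams)]

a, b, c = b"AAAA", b"BBBB", b"CCCC"
muxed = tdm_mux([a, b, c])
print(muxed)                  # b'ABCABCABCABC'
print(tdm_demux(muxed, 3))    # [b'AAAA', b'BBBB', b'CCCC']
```

Each input stream gets every third byte of the shared medium, which is exactly the division of one high-level channel into several low-level logical channels mentioned earlier.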

Ø  CDM or code-division multiplexing: 
In this multiplexing technique, the same frequency spectrum is shared by several channels at the same time. The bandwidth of the spectrum is quite high compared to the symbol rate or bit rate of each channel. It is implemented in either of two forms, namely direct-sequence spread spectrum and frequency hopping.

Some other types of multiplexing techniques which are less prominent are:
Polarization-division multiplexing: Used in optical and radio communications.
Orbital angular momentum multiplexing


Monday, September 23, 2013

What is meant by Quality of Service provided by network layer?

- QoS or quality of service is a parameter that refers to a number of aspects of computer networks, telephony etc. 
- This parameter allows traffic to be transported as per some specific requirements. 
- Technology has advanced so much that computer networks can now also double up as telephone networks for carrying audio conversations. 
- The technology even supports applications which have strict service demands. 
- The ITU defines the quality of service in telephony. 
- It covers the requirements concerning all aspects of the connection, such as the following:
Ø  Service response time
Ø  Loss
Ø  Signal – to – noise ratio
Ø  Cross – talk
Ø  Echo
Ø  Interrupts
Ø  Frequency response
Ø  Loudness levels etc.  

- The GoS (grade of service) requirement is one subset of the QoS and consists of those aspects of the connection that relate to its coverage as well as capacity. 
- Examples are the outage probability, the maximum blocking probability and so on. 
- In the case of packet-switched telecommunication networks and computer networking, the resource reservation mechanisms come under the concept of traffic engineering. 
- QoS can be defined as the ability by virtue of which different applications, data flows and users can be provided with different priorities. 
- It is important to have QoS guarantees when the capacity of the network is insufficient. 
- Examples are voice over IP, IP-TV and so on. 
- All these services are sensitive to delays, have fixed bit rates and require a minimum capacity.
- A protocol or network supporting QoS may agree upon a traffic contract with the application software and reserve capacity in the network nodes. 
- However, quality of service is not supported by best-effort services. 
- Providing high-quality communication over such networks is an alternative to complex QoS control mechanisms. 
- This works when the capacity is over-provisioned so much that it becomes sufficient for the expected peak traffic load. 
- Since the network congestion problems are then eliminated, the QoS mechanisms are also not required. 
- QoS may sometimes be taken as the level of the service's quality, i.e., the GoS. 
- Examples are low bit error probability, low latency, high bit rate and so on. 
- QoS can also be defined as a metric that reflects the experienced quality of the service.
- It is the cumulative effect on the acceptability of the service. 
Certain types of the network traffic require a defined QoS such as the following:
Ø  Streaming media such as IPTV (internet protocol television), audio over Ethernet, audio over IP etc.
Ø  Voice over IP
Ø  Video conferencing
Ø  Telepresence
Ø  Storage applications such as iSCSI and FCoE
Ø  Safety-critical applications
Ø  Circuit emulation service
Ø  Network operations support systems
Ø  Industrial control systems
Ø  Online games

- All the above mentioned services are examples of inelastic services, and a certain level of latency and bandwidth is required for them to operate properly. 
- On the other hand, the opposite kind of services, the elastic services, can work with any level of bandwidth and latency. 
- An example of this type of service is a bulk file transfer application based upon TCP.
- A number of factors affect the quality of service in packet-switched networks. 
- These factors can be broadly classified into two categories, namely the technical and the human factors. 
The following factors are counted as the technical factors:
Ø  Reliability
Ø  Scalability
Ø  Effectiveness
Ø  Maintainability
Ø  Grade of service and so on.

- Voice transmissions over circuit-switched networks such as ATM (asynchronous transfer mode) or GSM have QoS built into their core protocol. 
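The priority idea behind such QoS mechanisms can be sketched as a strict-priority scheduler: inelastic traffic such as VoIP is dequeued before elastic bulk traffic whenever the link is busy. The class names and numbers below are illustrative choices, not taken from any standard.

```python
import heapq
from itertools import count

# Lower number = higher priority (illustrative classes, not a standard).
PRIORITY = {"voip": 0, "video": 1, "bulk": 2}

class QosScheduler:
    def __init__(self):
        self._heap = []
        self._seq = count()   # tie-breaker keeps FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap,
                       (PRIORITY[traffic_class], next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = QosScheduler()
sched.enqueue("bulk", "file-chunk-1")
sched.enqueue("voip", "voice-frame-1")
sched.enqueue("bulk", "file-chunk-2")
sched.enqueue("voip", "voice-frame-2")

# Voice frames drain first even though bulk traffic arrived earlier.
print([sched.dequeue() for _ in range(4)])
# ['voice-frame-1', 'voice-frame-2', 'file-chunk-1', 'file-chunk-2']
```

Strict priority alone can starve the bulk class, which is why, as noted above, priority schemes by themselves cannot solve congestion.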


Tuesday, September 10, 2013

What are the differences between bridges and repeaters?

Bridges and repeaters are both important devices in the field of telecommunications and computer networking. In this article we discuss these two devices and the differences between them. 
- Repeaters are deployed at the physical layer, whereas bridges are found at the MAC layer. 
- Thus, a repeater is called a physical layer device. 
- Similarly, a bridge is known as a MAC layer device. 
- A bridge is responsible for storing as well as forwarding data frames in an Ethernet.
- First, it examines the header of the data frame, selects some of the frames and then forwards them towards the destination address mentioned in the frame. 
- A bridge uses CSMA/CD to access a segment whenever a data frame has to be forwarded to it.
- Another characteristic of a bridge is that its operation is transparent. 
- This means that the hosts in the network do not know that the bridge is present. 
- Bridges are self-learning; they do not have to be configured again and again. 
- They can simply be plugged into the network. 
- Installing a bridge breaks a LAN into LAN segments. 
- Packets are filtered with the help of bridges. 
- The frames that belong to one LAN segment are not sent to the other segments. 
- This implies that separate collision domains are formed. 
The bridge maintains a bridge table consisting of the following entries:
  1. LAN address of the node
  2. Bridge interface
  3. Time stamp
Stale table entries are purged from the table after a timeout.

- Bridges learn by themselves which interface can be used to reach which host. 
- After receiving a frame, a bridge looks at the location of the sending node and records it.
- It keeps the collision domains isolated from one another, thus giving the maximum throughput. 
- It is capable of connecting a number of nodes and offers virtually limitless geographical coverage. 
- Even different types of Ethernet can be connected through it. 
- Repeaters are also plug-and-play devices, but they do not provide any traffic isolation. 
- Repeaters are used to regenerate incoming signals, which get attenuated with distance. 
- Over physical media such as wifi or Ethernet cable, signals can travel only a limited distance before their quality starts degrading. 
- The work of the repeaters is to increase the distance over which the signals can travel until they reach their destination. 
- Repeaters also strengthen the signals so that their integrity can be maintained. 
- Active hubs are an example of repeaters and are often known as multi-port repeaters. 
- Passive hubs do not serve as repeaters. 
- Another example of repeaters are the access points in a wifi network. 
- However, they function as repeaters only when in repeater mode. 
- Regenerating signals using repeaters is a way of overcoming the attenuation which occurs because of cable loss or electromagnetic field divergence. 
- For long distances, a series of repeaters is often used. 
- Repeaters also remove the unwanted noise that gets added to the signal. 
- Repeaters can only perceive and restore digital signals.
- This is not possible with analog signals. 
- Analog signals can be amplified with the help of amplifiers, but these have a disadvantage: they amplify the noise as well. 
- Digital signals are more prone to dissipation than analog signals since they depend completely upon the presence of discrete voltage levels. 
- This is why they have to be repeated again and again using repeaters. 
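The self-learning and filtering behavior of a bridge described above can be sketched as a table keyed by source MAC address. This is a simplified model; a real bridge also ages out stale entries and runs spanning tree to avoid loops.

```python
class LearningBridge:
    def __init__(self, ports):
        self.table = {}          # MAC address -> port (the bridge table)
        self.ports = ports

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: the sender is reachable through the port it arrived on.
        self.table[src_mac] = in_port
        # Forward: use the table if possible, otherwise flood.
        if dst_mac in self.table:
            out = self.table[dst_mac]
            return [] if out == in_port else [out]  # filter same-segment traffic
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.handle_frame("aa", "bb", in_port=1))  # unknown dst: flood -> [2, 3]
print(bridge.handle_frame("bb", "aa", in_port=2))  # "aa" was learned -> [1]
print(bridge.handle_frame("aa", "bb", in_port=1))  # "bb" now learned -> [2]
```

The empty-list case is the filtering mentioned above: frames whose destination is on the same segment are never forwarded, which is what keeps the collision domains separate.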


Thursday, August 29, 2013

How can traffic shaping help in congestion management?

- Traffic shaping is an important part of the congestion avoidance mechanism, which in turn comes under congestion management. 
- If the traffic can be controlled, we can obviously maintain control over network congestion. 
A congestion avoidance scheme can be divided into the following two parts:
  1. The feedback mechanism and
  2. The control mechanism
- The feedback mechanism is also known as the network policy and the control mechanism as the user policy.
- Of course there are other components as well, but these two are the most important. 
- While analyzing one component, it is simply assumed that the other components are operating at optimum levels. 
- At the end, it has to be verified whether or not the combined system works as expected under various types of conditions.

The network policy consists of the following three algorithms:

1. Congestion Detection: 
- Before feedback can be sent to the users, the network's load level or state must be determined. 
- In general, the network can be in any of n possible states. 
- At a given time the network is in one of these states. 
- The congestion detection algorithm maps these states into the possible load levels. 
- There are two possible load levels, namely under-load and over-load. 
- Under-load means operating below the knee point and over-load means operating above it. 
- A k-ary version of this function would produce k load levels. 
- There are three criteria on which the congestion detection function works: link utilization, queue lengths and processor utilization. 

2. Feedback Filter: 
- After the load level has been determined, it has to be verified that the state lasts for a sufficiently long duration before it is signaled to the users. 
- Only in this case is the feedback of the state actually useful, since the duration is long enough to be acted upon. 
- On the other hand, a state that changes rapidly may create confusion: the state has already passed by the time the users get to know of it. 
- Such states give misleading feedback. 
- A low-pass filter function serves the purpose of filtering out the short-lived states and passing on only the lasting ones. 
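The low-pass filtering of load states can be sketched as an exponentially weighted moving average: a short spike barely moves the smoothed value, while a persistent overload eventually crosses the signaling threshold. The weight 0.1 and threshold 0.5 are illustrative choices, not values from any specific scheme.

```python
def low_pass(samples, weight=0.1):
    """Exponentially weighted moving average of instantaneous load samples."""
    smoothed, out = 0.0, []
    for s in samples:
        smoothed = (1 - weight) * smoothed + weight * s
        out.append(smoothed)
    return out

# 1 = over-load sample, 0 = under-load sample.
spike = [0, 0, 1, 0, 0, 0]           # transient state: should be filtered out
sustained = [1] * 20                 # lasting over-load: should be signaled

print(max(low_pass(spike)) > 0.5)    # False: the spike never crosses 0.5
print(low_pass(sustained)[-1] > 0.5) # True: the persistent state gets through
```

Only states that survive the filter would be handed to the feedback selector for delivery to the users.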

3. Feedback Selector: 
- After the state has been determined, this information has to be passed to the users so that they may contribute to cutting down the traffic. 
- The purpose of the feedback selector function is to identify the users to whom the information has to be sent.

The user policy consists of the following three algorithms: 

1. Signal Filter: 
- The users to whom the network sends the feedback signals interpret them after accumulating a number of signals. 
- The network is probabilistic in nature, and therefore the signals may not all agree. 
- According to some signals the network may be under-loaded, while according to others it may be over-loaded. 
- These signals have to be combined to decide the final action. 
- Based upon the percentages of each kind, an appropriate weighting function may be applied. 

2. Decision Function: 
- Once the load level of the network is known to the user, it has to be decided whether or not to increase the load.
- This function has two parts: the first determines the direction of the change, and the second decides its amount. 
- The first part is the decision function proper and the second comprises the increase/decrease algorithms. 

3. Increase/Decrease Algorithm: 
- The control action forms the major part of the control scheme.
- The control measure to be taken is based upon the feedback obtained. 
- It helps in achieving both fairness and efficiency. 
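The increase/decrease step can be sketched as additive-increase/multiplicative-decrease (AIMD), a policy known to drive the system toward both fairness and efficiency. The constants below are illustrative.

```python
def aimd_step(load, overloaded, increase=1.0, decrease=0.5):
    """One control step: add when under-loaded, multiply down when over-loaded."""
    if overloaded:
        return load * decrease    # multiplicative decrease
    return load + increase        # additive increase

# Two users starting at unequal loads converge toward a fair share.
a, b, capacity = 10.0, 2.0, 16.0
for _ in range(50):
    over = (a + b) > capacity     # feedback: is the network over-loaded?
    a, b = aimd_step(a, over), aimd_step(b, over)

print(abs(a - b) < 1.5)   # True: AIMD has pushed the two loads close together
```

The additive phase keeps the gap between users constant while the multiplicative phase halves it, which is why repeated cycles shrink the unfairness while total load oscillates near capacity.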


Saturday, August 24, 2013

How can the problem of congestion be controlled?

Networks often get trapped in a situation that we call network congestion. To avoid such collapses, congestion avoidance and congestion control techniques are often used by networks nowadays. 

In this article, we discuss how we can control the problem of network congestion using these techniques. A few very common techniques are:
  1. Exponential back off (used in CSMA/ CA protocols and Ethernet.)
  2. Window reduction (used in TCP)
  3. Fair queuing (used in devices such as routers)
  4. The implementation of priority schemes is another way of avoiding the negative effects of this very common problem. Priority schemes let the network transmit packets having higher priority ahead of the others. In this way, the effects of network congestion are alleviated only for some important transmissions, so priority schemes alone cannot solve the problem.
  5. Another method is the explicit allocation of network resources to certain flows. This is commonly used in CFTXOPs (contention-free transmission opportunities), providing very high speed for LANs (local area networks) over the coaxial cables and phone lines that already exist.
- The main cause of the problem of network congestion is the limited capacity of the network. 
- This is to say that the network has limited resources. 
- These resources include the link throughput and the router processing time. 
- Congestion control is concerned with curbing the entry of traffic into the telecommunications network so that the problem of congestive collapse can be avoided. 
- Over-subscription of the link capacities is avoided and steps are taken to reduce resource usage. 
- One such step is reducing the packet transmission rate. 
- Even though it sounds similar to flow control, it is not the same thing. 
- Frank Kelly is known as the pioneer of the theory of congestion control. 
- He used two theories, namely convex optimization and micro-economics, to describe how individuals controlling their own rates can optimize the network-wide rate allocation. 

Some optimal rate allocation methods are:
Ø  Max – min fair allocation
Ø  Kelly’s proportional fair allocation

Ways to Classify Congestion Control Algorithm

There are 4 major ways for classifying the congestion control algorithms:
  1. Amount as well as type of feedback: This classification judges the algorithm on the basis of multi-bit or single-bit explicit signals, delay, loss and so on.
  2. The performance aspect taken for improvement: Includes variable-rate links, short-flow advantage, fairness, lossy links etc.
  3. Incremental deployability: Whether modification is needed by the sender only, by both the receiver and the sender, by the router only, or by all three, i.e., the sender, receiver and router.
  4. Fairness criterion being used: It includes minimum potential delay, max-min, proportional and so on.
Two major components are required for preventing network congestive collapse:
  1. End to end flow control mechanism: This mechanism is designed to respond to congestion by reducing the sending rate accordingly.
  2. Mechanism in routers: This mechanism is used for dropping or reordering packets under the condition of overload.

- Correct behavior of the end points is required for retransmitting the dropped information. 
- This indeed slows down the information transmission rate. 
- If all the end points exhibit this kind of behavior, the congestion would be lifted from the network. 
- Also, all the end points would then share the available bandwidth fairly. 
- Slow start is another strategy, using which it can be ensured that new connections do not overwhelm the router before congestion can be detected. 


Thursday, July 11, 2013

What properties are common between WDMA and GSM channel access protocols?

About Wavelength Division Multiplexing Access (WDMA)
- Wavelength division multiplexing access or WDMA is a concept of fiber-optic communications in which a number of optical carrier signals are multiplexed onto one optical fiber. 
- This is done by using a number of different wavelengths, i.e., colors, of laser light. 
- This technique has made it possible to carry out bidirectional communication over one fiber strand and to multiply its capacity.
- Frequency-division multiplexing is applied to radio carriers, whereas WDM is used in optical fibers. 
- An inverse relationship ties frequency and wavelength together, since the two terms describe the same concept. 
- WDMA is classified under channel access methods and is based on wavelength division multiplexing. 
- A system using WDM consists of a multiplexer at the transmitting end for joining the signals together. 
- Similarly, a de-multiplexer is installed at the receiver's end.
- The de-multiplexer is used for splitting the joined signal apart.
- With the right kind of fiber it is possible to make a device that does both multiplexing and de-multiplexing simultaneously, as well as an optical add-drop multiplexer.

About Global System for Mobile Communications (GSM)
- Global system for mobile communications (GSM) is a standard set developed by ETSI (the European Telecommunications Standards Institute) to describe the protocols for 2G, i.e., second generation, mobile networks. 
- This standard set now rules the market with around 80 percent of the total market share. 
- GSM describes a digital network operating on circuit switching that has been optimized for full duplex voice telephony. 
- A GSM network is a cellular network. 
- To connect to this network, a cell phone searches for cells in its surroundings. 
- A GSM network offers 5 different cell sizes, namely:
  1. Macro
  2. Micro
  3. Pico
  4. Femto
  5. Umbrella cells
- The extent of the area covered by each of the cells varies depending on the environment in which they are implemented. 
- In the macro cells, the base station antenna is installed on a mast.
- In the micro cells, the antenna height is at the average level of the roof tops. 
- Pico cells have a coverage diameter of only a few meters.
- Femto cells find use in small business and residential environments. 
- They are used for connecting to the ISP's network. 
- The shadowed regions of the smaller cells are covered by the umbrella cells.

Common Properties and Differences between WDMA and GSM

Usually there are a number of differences between GSM and WDMA, but they also share some common properties. 
- The base stations of the GSM system and of the WDMA system are both connected to the GSM core network in order to provide radio connectivity to the handsets. 
- Therefore, the same core network is shared by both technologies. 
- The principles of the cellular radio system form the basis for both technologies. 
- There exists a correspondence between the WDMA radio network controller (RNC) and the GSM base station controller (BSC). 
- The RBS (radio base station) of the GSM system corresponds to the RBS of WDMA. 
- The basis for developing the Iu interface of WDMA was provided by the A-interface of the GSM technology. 
- The only difference lies in the new additional services that are offered by WDMA. 
- GSM makes use of time division multiplexing along with the radio functionality for the management of the time slots. 
- On the other hand, WDMA makes use of code division multiplexing for the same purpose. 
- This implies that both the control functions and the hardware of the two are different.

