

Showing posts with label Algorithm. Show all posts

Monday, October 14, 2013

What are secret-key and public-key signatures?

- Asymmetric cryptography is often referred to as public-key cryptography. 
It is a class of cryptographic algorithms that makes use of two individual keys, namely the secret (private) key and the public key. 
- The secret key is kept private while the public key is distributed openly. 
- Even though these two keys are different, they are mathematically linked. 
- The public key is used for the encryption of plaintext and the verification of a digital signature. 
- The private key is used for the decryption of ciphertext back into plaintext and for the creation of a digital signature. 
- The two keys thus play complementary roles, unlike in symmetric cryptography, where the same key serves both purposes. 
- Public keys are created based upon mathematical problems for which no efficient solution is currently known, such as the following:
Ø  Elliptic curve relationships
Ø  Discrete logarithms
Ø  Integer factorization
- Generating a public and private key pair is computationally easy for the user. 
- The strength of public-key cryptography lies in the fact that determining the private key from its public key is computationally infeasible. 
- Thus, without fearing any compromise of security, the public key can be published, whereas the private key is kept hidden so that it is not revealed to anyone who lacks authorization to create digital signatures or read the messages. 
- Unlike symmetric-key algorithms, public-key algorithms do not require a secure initial exchange of secret keys. 
- In message authentication, a private key is used to process a message and produce the digital signature. 
- After that, anyone can verify the signature by processing its value with the signer's corresponding public key. 
- The result is then compared with the message. 
- A match confirms that the message has not been modified. 
- It is also presumed that the signer's private key has been kept hidden from others. 
- In practical applications, however, it is the message's digest or hash that is signed and used as the signature. 
- Public-key algorithms are fundamental security components of cryptosystems, protocols and applications.
These systems underpin the following internet standards:
Ø  PGP
Ø  GPG
Ø  TLS (Transport Layer Security)


- Some public-key algorithms, such as the Diffie-Hellman key exchange, provide secrecy and key distribution; others, like the Digital Signature Algorithm, provide digital signatures; and some offer both.
- RSA is an example of an algorithm that offers both. 
- All these algorithms are widely accepted. 
- A pair of cryptographic keys (i.e., a public key for encryption and a private key for decryption) is provided to each of the users. 
- Similarly, for digital signatures the pair of keys consists of a private key for signing and a public key for verification. 
- Keeping the decryption key private is what ensures confidentiality. 
- The digital signatures can be verified by anyone possessing the corresponding public key. 
- A successful verification confirms that the sender possesses the private key. 
- It is also a way to confirm that the message has not been tampered with. 
- If the message has been tampered with, the encoded message digest will change. 
- A mailbox with a mail slot can be taken as an analogy to public-key encryption, and a personal wax seal as an analogy to digital signatures. 
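The sign-and-verify flow described above can be sketched with a toy RSA example. The parameters below are deliberately tiny and insecure, chosen only for illustration; real systems use vetted libraries and proper padding schemes.

```python
import hashlib

# Toy RSA parameters (far too small for real use; illustration only).
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

def digest(message: bytes) -> int:
    # Hash the message and reduce it into the range of the modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # The signer applies the private key to the message digest.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone can apply the public key and compare with the digest.
    return pow(signature, e, n) == digest(message)

msg = b"pay Alice 100"
sig = sign(msg)
print(verify(msg, sig))               # True: signature matches
print(verify(b"pay Alice 900", sig))  # False: tampering changed the digest
```

Note how tampering with the message changes its digest, so verification with the public key fails, exactly as described above.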


Friday, August 16, 2013

What is meant by flow based routing?

- The routing algorithm that considers the flow in the network is known as flow-based routing. 
- It takes into consideration the amount of traffic flowing in the network before deciding over which outgoing link a packet is to be sent. 
- The key to a successful implementation of flow-based routing is the ability to characterize the nature of the traffic flow with respect to time. 
- For any given line, if we know its capacity and its average flow, we can compute the mean packet delay on that line using queueing theory. 
- This is the basic idea behind the implementation of this algorithm. 
- It reduces the routing problem to finding the routing that minimizes the average delay for the subnet, and nothing else. 
- Thus, flow-based routing considers both the load and the topology of the network, while other routing algorithms do not. 
- In some networks, the mean data flow between two nodes may be predictable as well as relatively stable. 
- Under such conditions the average traffic between two points is known. 
- In these conditions a mathematical analysis of the flow is possible. 
- This analysis can be used in the optimization of the routing. 
- The flow-weighted average delay can be calculated straightforwardly, which in turn gives the mean packet delay of the entire subnet. 

The flow based routing algorithm requires the following things in advance:
Ø  Topology of the subnet
Ø  Traffic matrix
Ø  Capacity matrix
Ø  A routing algorithm
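The delay calculation described above can be sketched with the standard M/M/1 queueing result T = 1 / (mu*C - lambda). The line capacities, flows and mean packet length below are made-up illustrative values, not taken from any real network.

```python
# M/M/1 mean delay per line: T = 1 / (mu*C - lambda), where C is the
# line capacity in bits/s, lambda the flow in packets/s, and 1/mu the
# mean packet length in bits. All numbers are made up for illustration.

MEAN_PACKET_BITS = 800        # assumed mean packet length (1/mu)
MU = 1 / MEAN_PACKET_BITS     # packets per bit

# (capacity in bps, mean flow in packets/s) for each line of the subnet
lines = [
    (20_000, 14),
    (20_000, 12),
    (10_000, 6),
    (50_000, 20),
]

def line_delay(capacity, flow):
    # Mean packet delay on one line in seconds; valid while MU*C > flow.
    return 1.0 / (MU * capacity - flow)

total_flow = sum(flow for _, flow in lines)
# Flow-weighted average: each line's delay weighted by its share of traffic.
mean_delay = sum(flow * line_delay(cap, flow) for cap, flow in lines) / total_flow
print(f"mean packet delay: {mean_delay * 1000:.1f} ms")
```

A routing that lowers this flow-weighted average is, by this criterion, a better routing for the subnet.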
- Information flow based routing algorithms are commonly used in wireless sensor networks. 
- These days, the measure of information is used as a criterion for analyzing the performance of flow-based routing algorithms. 
- One line of research has argued that since a sensor network is driven by the objective of estimating a 2D random field, the information flow must be maximized over the entire field and over the sensors' lifetime. 
In response, two types of flow-based routing algorithms have been designed, namely:
  1. Maximum information routing (MIR) and
  2. Conditional maximum information routing (CMIR)
- Both of these algorithms have proved to be a significant improvement over the existing algorithm, maximum residual energy path (MREP).

About MREP Algorithm

 
- MREP proves to be quite effective in conserving energy. 
- The limited battery energy is treated as the most important resource. 
- To maximize the network lifetime, the energy consumption has to be balanced across the nodes. 
- This should be done in proportion to the energy reserves of each node. 
- This works better than routing so as to minimize the absolute power consumed.

About MIR Algorithm

 
- The idea behind the MIR algorithm is that not all nodes are equal. 
- For example, two very close nodes might not provide twice as much information as a single isolated node. 
- Therefore, preference is given only to the nodes that provide more information. 
- To achieve this preference, an additional penalty based on each node's information contribution is added to the node. 
- Dijkstra's algorithm is used for the computation of the shortest path. 
- This allows the data to be sent to the sink according to both the information at the origin and the power consumed.


About CMIR Algorithm

- CMIR is a hybrid algorithm: it uses MIR to some extent and then uses the MREP algorithm for the rest of the cycle. 
- This hybrid version performs better than either of the two standalone algorithms. 


Saturday, August 10, 2013

Shortest Path Routing - a type of routing algorithm

- The usage of re-configurable logic has been increasing day by day, both in scope and in number. 
- Re-configurable computing combines the speed of hardware with the flexibility of software. 
- This is the result of combining high-speed computing with re-configurability.
- Increased QoS (quality of service) requirements place tough demands on the routing in a network. 
- The computational capability required grows roughly exponentially with the QoS demanded. 
- Hence, additional computational resources are needed for achieving an acceptable level of network performance. 
- Re-configurable computing offers a promising solution to the computational issues of the routing process.
There are 3 major aspects of the shortest path routing as mentioned below:
Ø Path selection: This involves algorithms such as Dijkstra's and Bellman-Ford, and approaches such as shortest-path and minimum-hop routing.
Ø Topology change: Changes in the topology are detected using the beacons.
Ø  Routing protocols: This involves routing protocols such as the link state routing protocols and distance vector protocols.
- Forwarding and routing are two different things. 
- In forwarding, an individual router directs a data packet towards an outgoing link using the forwarding table it maintains.
- Routing computes the paths that the packets have to follow. 
- Routers exchange path information among themselves, and each router in the chain builds its own forwarding table.

Routing is important for the following three main reasons:
Ø  End-to-end performance: The user performance is affected by the path quality, throughput, packet loss and delay in propagation.
Ø  Use of the network resources: The traffic has to be balanced between the several links and routers. The traffic is directed towards the links that are lightly loaded for avoiding the congestion.
Ø  Transient disruptions during changes: These disruptions include load balancing problems, maintenance, failures, etc. Packet loss as well as delay has to be limited while the changes take effect.


- Shortest path routing is based upon a path selection model that gives more preference to the destination. 
- This type of routing is insensitive to load, as it involves static link weights.
- Here, either the sum of the link weights or the minimum hop count is considered. 
In a shortest path problem, the link costs are given for a network topology. 
- For example, C(x,y) denotes the cost of the link from node x to node y. 
- If the two nodes x and y are not adjacent to each other, the cost is taken to be infinity. 
- The least-cost paths to all the nodes are computed from a node taken as the source. 
- Dijkstra’s shortest path algorithm is one of the algorithms used in the shortest path routing. 
- A central role is played by problems involving finding shortest paths in the design and analysis of networks.
- A majority of routing problems can be treated as shortest path problems and solved, provided each link in the network has an appropriate cost assigned to it. 
- This cost can even reflect the bandwidth as well as the bit error ratio if required. 
- A number of algorithms are available for computing the shortest path.
- But these algorithms are applicable only if every edge in the network is characterized by a single non-negative additive metric.
- Of these algorithms, Dijkstra's algorithm is the most famous. 
- This algorithm finds its use in the OSPF (Open Shortest Path First) routing protocol of the internet. 
- In this algorithm the number of operations per iteration is proportional to the number of nodes in the network, and the iteration is carried out n-1 times. 
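A compact sketch of Dijkstra's algorithm is given below, on a made-up four-node topology; the link costs are illustrative, and absent links count as infinite cost, as described above.

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: {neighbour: link_cost}}; missing links are infinite.
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue              # stale heap entry, already improved
        for v, cost in graph[u].items():
            if d + cost < dist[v]:
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

# Made-up example subnet with symmetric link costs C(x, y).
net = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
print(dijkstra(net, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

The direct link A-C costs 5, but the algorithm finds the cheaper path A-B-C with cost 3, and A-B-C-D with cost 4.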


Tuesday, August 6, 2013

What is meant by an optimal route?

- For selecting a path or route, a routing metric has to be applied to a number of candidate routes so as to select the best of them. 
- This best route is called the optimal route with respect to the routing metric used. 
- In computer networking, this routing metric is computed with the help of routing algorithms.
- It consists of information such as hop count, network delay, load, MTU, path cost, communication cost, reliability and so on.
- Only the best or the optimal routes are stored in the routing tables that reside in the memory of the routers. 
- The other information is stored in either the topological or the link state databases. 
- There are many types of routing protocol and each of them has a routing metric specific to it. 
- Multi-protocol routers must use some external heuristic for selecting between routes determined by different routing protocols. 
For example, the administrative distance is a value attributed to each route in Cisco routers. 
- Here, a smaller distance means the protocol is considered more reliable. 
- Host specific routes to a certain device can be set up by the local network admin. 
- This will offer more control over the usage of the network along with better overall security and permission for testing. 
- This advantage comes in handy especially when the routing tables and connections need to be debugged. 

In this article we discuss optimal routes. 
- With the growing popularity of IP networks as mission-critical business tools, the need for methods and techniques for monitoring the network's routing posture is increasing.
- Routing issues or incorrect routing can have undesirable effects on the network, such as downtime, flapping or performance degradation. 
- Route analytics refers to the techniques and tools used for monitoring the routing in a network. 

The performance of the network is measured using the following 2 factors:
  1. Throughput or the quantity of service: This includes the amount of data that is transmitted and the time it takes to transfer it.
  2. Average packet delay or quality of service: This includes the time taken by a packet to arrive at its destination and the responsiveness of the system to the commands entered by the user.
- There is always a tension between fairness and optimality, or, we could say, between quantity of service and quality of service. 
- Optimizing throughput tends to saturate the paths between nodes, while the response time from source to destination must still be kept acceptable. 

For finding the optimal routes, we have two types of algorithms namely:
  1. Adaptive Algorithms: These algorithms are meant for the networks in which the routes change in a dynamic manner. Here the information regarding the route to be followed is obtained at the run time itself from adjacent as well as the all other routers. The routes change whenever there is a change in the load, change in the topology and every delta T seconds.
  2. Non-adaptive algorithms: These algorithms do not base their routing decisions on measurements of the current traffic or topology. The routes thus obtained are called static routes and are computed in advance, at boot time.

Finding optimal routes relies on the principle of optimality, according to which, if an intermediate router lies on the optimal path from a source to a destination, then the optimal path from that intermediate router to the destination lies along the same route. 


Monday, August 5, 2013

What is optimality principle?

A network consists of nodes which need to communicate with one another on various grounds. This communication is established via the communication channels that exist between them. The communication involves data transfers. In a network, a node may or may not have a link with every other node. 
Applications that require communicating over a network include:
1. Telecommunication network applications such as POTS/ PSTN, local area networks (LANs), internet, mobile phone networks and so on.
2. Distributed system applications
3. Parallel system applications

- As we mentioned above, each and every node might not be linked with every other node, since doing so would require a lot of wires and cables, which would make the whole network more complicated. 
- Therefore, we bring in the concept of the intermediate nodes. 
- The data transmitted by the source node is forwarded to the destination by these intermediate nodes. 
Now the problem that arises is which path or route will be the best to use i.e., the path with the least cost. 
- This is determined using the routing process. 
- The best path thus obtained is called the optimal route. 
- Today, we have a number of algorithms available for determining the optimal path. 

These algorithms have been classified in to two major types:
  1. Non – adaptive or static algorithms
  2. Adaptive or dynamic algorithms

Concept of Optimality Principle

- This is the principle followed while determining the optimal route between two nodes. 
The general statement of the principle of optimality is stated below:
“An optimal policy has the property that whatever the initial state and initial decision are, the remaining decision must constitute an optimal policy with regard to the state resulting from the first decision.”

- This means that if P is an optimal state that results in another state, say Q, then the portion of the original policy from P to Q must itself be optimal. 
- This simply means that optimality is preserved on every part of an optimal policy. 
- The initial state and the final state are the most important parts of the optimum. 
- Consider an example: suppose we have a problem with 3 inputs and 26 states. 
- Here, the state is associated with the optimum and the total cost is associated with the optimum policy.
- If a brute-force method is used for 3 inputs and 100 stages, the total number of computations is 3^100.
- That means that solving this problem would require a supercomputer.
- Therefore, the approach used for solving this problem is dynamic programming. 
- Here, for each state the least-cost step is computed and stored during the programming. 
- This reduces the number of possibilities and hence the amount of computation.
- The problems become complex if the initial and final states are undefined. 
- It is necessary for a problem to follow the principle of optimality in order to apply dynamic programming. 
- This implies that whatever the state may be, the decisions that follow must be optimal with regard to the state obtained from the previous decision. 
- This property is found in combinatorial problems, but since they use a lot of time and memory, this method is inefficient for them. 
- Such problems can be solved efficiently if some sort of best-first search and pruning technique is applied.
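The idea of storing the least cost per state can be sketched on a small staged decision problem. The cost function below is made up purely for illustration; brute force enumerates all 3^n policies, while dynamic programming reuses the best remaining cost per state, as the optimality principle allows.

```python
from functools import lru_cache
from itertools import product

N_STAGES, N_INPUTS = 10, 3   # 3**100 would be infeasible; 3**10 is not

def step_cost(stage, prev, choice):
    # Arbitrary made-up cost; the state carried between stages is the
    # previous choice.
    return (stage + prev * choice + 1) % 5

def brute_force():
    # Enumerate every policy: 3**10 = 59049 sequences of choices.
    best = float("inf")
    for policy in product(range(N_INPUTS), repeat=N_STAGES):
        prev, total = 0, 0
        for stage, choice in enumerate(policy):
            total += step_cost(stage, prev, choice)
            prev = choice
        best = min(best, total)
    return best

@lru_cache(maxsize=None)
def dp(stage, prev):
    # Best cost from this stage onward; by the optimality principle it
    # depends only on the current state, not on how we reached it.
    if stage == N_STAGES:
        return 0
    return min(step_cost(stage, prev, c) + dp(stage + 1, c)
               for c in range(N_INPUTS))

print(brute_force() == dp(0, 0))  # True: both find the same optimal cost
```

The dynamic program evaluates each (stage, state) pair once, so its work grows linearly with the number of stages rather than exponentially.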

- In regard to routing in networks, it follows from the optimality principle that if a router B lies on an optimal path between routers A and C, then the path from B to C also lies along that same optimal path. 
- The set of all optimal routes to a destination forms a sink tree, and finding it is the ultimate goal of all routing algorithms.


Wednesday, June 19, 2013

Explain the Priority CPU scheduling algorithm

A number of scheduling algorithms are available today and all are appropriate for some different kinds of scheduling environments. In this article we give a brief explanation about the ‘priority CPU scheduling algorithm’. 

For those who are not familiar with this scheduling algorithm, a special case of the priority algorithm is the shortest job first scheduling algorithm (SJF). 

- This algorithm involves associating a priority with each and every thread or process. 
- Out of all the processes, the one with the highest priority is chosen and given to the processor for execution. 
- Thus, the priority decides which process is to be executed first. 
There are cases when two or more processes have the same priority. 
- In such cases the FCFS (first come, first served) scheduling algorithm is applied. 
The process first in the queue is then executed first. 
- SJF is essentially a special case of the priority algorithm. 
- Here, the priority of a process (indicated by p) is simply taken as the inverse of its predicted next CPU burst. 
- This implies that if a process has a large CPU burst, its priority will accordingly be low, and similarly, if the CPU burst is small, the priority will be high. 
Numbers in some fixed range are used for indicating priorities, such as 0 to 4095 or 0 to 7. 
- One thing to note is that there is no general agreement on whether the number 0 indicates the lowest or the highest priority.
- In some systems low numbers indicate low priorities, while in others low numbers mean high priorities. 
- The latter case, i.e., using low numbers to represent high priorities, is the more common one.
For example, consider the 5 processes P1, P2, P3, P4 and P5 having CPU bursts of 10, 1, 2, 1 and 5 respectively, and priorities of 3, 1, 4, 5 and 2 respectively (where a lower number means a higher priority). Using the priority scheduling algorithm, the processes will be executed in the following order:
P2, P5, P1, P3, P4
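The worked example above can be sketched as a stable sort on priority (lower number = higher priority), so that equal priorities fall back to FCFS (arrival) order:

```python
# (name, cpu_burst, priority) in arrival order, as in the example above.
processes = [
    ("P1", 10, 3),
    ("P2", 1, 1),
    ("P3", 2, 4),
    ("P4", 1, 5),
    ("P5", 5, 2),
]

# Python's sort is stable, so ties in priority keep arrival (FCFS) order.
order = sorted(processes, key=lambda p: p[2])
print([name for name, _, _ in order])  # ['P2', 'P5', 'P1', 'P3', 'P4']

# Waiting time of each process under this non-preemptive schedule.
start, waits = 0, {}
for name, burst, _ in order:
    waits[name] = start
    start += burst
print(waits)  # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
```

Note how the low-priority processes P3 and P4 wait the longest; with a steady stream of high-priority arrivals this waiting can become indefinite, which is the starvation problem discussed below.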

There are two ways of defining the priorities i.e., either externally or internally. This gives two types of priorities:

  1. Internally defined priorities: These priorities make use of some measurable quantities for computing a process's priority. These quantities include memory requirements, time limits, the ratio of the I/O burst to the CPU burst, the number of open files and so on.
  2. Externally defined priorities: These priorities are defined by criteria external to the operating system. Such factors include political factors, the department leading the work, the importance of the process, the amount of money paid and so on.
Priority scheduling can itself be divided into two types, namely preemptive and non-preemptive. The priority of a process waiting in the ready queue is compared with that of the executing process.

Ø  Preemptive priority scheduling: Here the CPU is preempted if the waiting process has a priority higher than that of the currently executing process.
Ø  Non – preemptive priority scheduling: Here the new process will be let waiting in the ready queue till the execution of the current process is complete.


Starvation or indefinite blocking presents a major problem in priority scheduling. A process is considered blocked if it is ready for execution but has to wait for the CPU. It is very likely that low priority processes will be left waiting indefinitely for the CPU. In a system that is heavily loaded most of the time, if the number of high priority processes is large, the low priority processes may be prevented from ever getting the processor. 


Monday, June 17, 2013

Explain the Round Robin CPU scheduling algorithm

There are a number of CPU scheduling algorithms, all having different properties, thus making them appropriate for different conditions. 
- The round robin (RR) scheduling algorithm is commonly used in time sharing systems. 
- It is the most appropriate scheduling algorithm for time sharing operating systems. 
- This algorithm shares many similarities with the FCFS scheduling algorithm, but with an additional feature. 
- This feature is preemption, with a context switch occurring between two processes. 
- In this algorithm a small unit of time is defined, termed the time slice or time quantum. 
- These time slices or quanta typically range from 10 ms to 100 ms.
- The ready queue in round robin scheduling is implemented in a circular fashion. 

How to implement Round Robin CPU scheduling algorithm

Now we shall see about the implementation of the round robin scheduling:
  1. The ready queue is maintained as the FIFO (first in first out) queue of the processes.
  2. Addition of new processes is made at the rear end of the ready queue and selection of the process for execution by the processor is made at the front end.
  3. The process first in the ready queue is thus picked by the CPU scheduler. A timer is set that will interrupt the processor when the time slice elapses. When this happens the process will be preempted.
  4. In some cases the CPU burst of a process may be less than the size of the time slice. In that case the process releases the CPU voluntarily. The scheduler then moves to the next process in the ready queue and fetches it for execution.
  5. In other cases the CPU burst of a process might be longer than the time slice. In this case the timer will send an interrupt to the processor, preempting the process and putting it at the rear end of the ready queue. The scheduler will then jump to the next process in the queue.
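The steps above can be sketched as a small simulation; the burst values below are made up for illustration.

```python
from collections import deque

def round_robin(bursts, quantum):
    # bursts: {process_name: cpu_burst}; returns the order in which
    # time slices are granted, as (name, slice_length) pairs.
    queue = deque(bursts)            # FIFO ready queue (steps 1-2)
    remaining = dict(bursts)
    schedule = []
    while queue:
        name = queue.popleft()       # step 3: pick the front process
        run = min(quantum, remaining[name])
        schedule.append((name, run))
        remaining[name] -= run
        if remaining[name] > 0:      # step 5: quantum expired, preempt
            queue.append(name)       # back to the rear of the queue
        # step 4: burst finished within the slice, process just leaves
    return schedule

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
```

With a quantum of 4, P1 runs one slice, P2 and P3 finish within their slices, and P1 then consumes the remaining slices alone, which is the single-process exception mentioned below.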
The waiting time in the round robin scheduling algorithm has been observed to be quite long on average. 
- In this algorithm, no process can be allocated more than one time slice in a row under any conditions. 
- However, there is an exception to this if there is only one process left to be executed.
- If the process exceeds its time slice, it is preempted and put back at the tail of the queue.
- Thus, we can also call this algorithm a preemptive algorithm. 
- The size of the time quantum greatly affects the performance of the round robin algorithm.
- If the size of the time quantum is too large, the algorithm degenerates into the FCFS algorithm. 
- On the other hand, if the quantum is too small, the RR approach becomes the processor sharing approach. 
- An illusion is created in which every process seems to have its own processor running at a fraction of the speed of the actual processor. 
- Furthermore, context switching affects the performance of the RR scheduling algorithm.
- A certain amount of time is spent in switching from one process to another. 
In this time the registers and memory maps are loaded, a number of lists and tables are updated, the memory cache is flushed and reloaded, and so on.
- The smaller the time quantum, the more often context switching occurs. 


Thursday, May 2, 2013

What is a CPU Scheduler?


Scheduling is a very important concept when it comes to multi-tasking operating systems. 
- It is the method via which data flows, threads and processes are provided access to the shared resources of the computer system. 
- These resources include communications bandwidth, processor time, memory and so on. 
- Scheduling is important as it helps in balancing the load on the system processor and using its resources effectively. 
- It helps in achieving the target QoS or quality of service. 
But what gave rise to scheduling? 
- Almost all modern systems need to carry out multiple tasks, i.e., multi-tasking, as well as multiplexing, both of which require a scheduling algorithm. 
Multiplexing means the transmission of multiple data flows at the same time. 
- There are some other things with which the scheduler is concerned. They are:
  1. Throughput: It is the number of processes completed in a given amount of time.
  2. Latency: This factor can be sub-divided into two, namely response time and turnaround time. Response time is the time from the submission of a process until its first output is produced by the processor. The latter, i.e., the turnaround time, is the time period elapsed between the submission of a process and its completion.
  3. Waiting/fairness time: This is the equal CPU time given to each process, or we can say that the time is allocated as per the priority of the processes. The time for which processes wait in the ready queue is also counted in this.
- In practice, however, conflicts may arise between these goals, as in the case of latency versus throughput. 
- If such a case occurs, a suitable compromise has to be implemented by the scheduler. 
- The needs and objectives of the user decide which of the above concerns is given preference. 
- In robotics or embedded systems, i.e., in real-time environments, it becomes the duty of the scheduler to ensure that all the processes meet their deadlines. 
- This is important for maintaining the stability of the system. 
- Mobile devices are given the scheduled tasks, which are then managed by an administrative back end.

Types of CPU Schedulers

There are many types of CPU schedulers as discussed below:
1. Long-term Schedulers: 
- These schedulers facilitate long term scheduling and are also known as high level schedulers or admission schedulers. 
- It is up to them to determine which processes and jobs are to be sent to the ready queue. 
- When an attempt is made to execute a program, the long term scheduler has the right to decide whether this program will be admitted to the currently executing set of processes. 
- Thus, this scheduler dictates which processes are to be run and what degree of concurrency there is to be.
- It also decides how many processes are to be concurrently executed. 
It also decides how the split between CPU-intensive and I/O-intensive processes is to be handled.  

2. Medium-term Schedulers: 
- This scheduler temporarily removes processes from main memory and places them in secondary memory. 
- These operations are called "swapping out" and "swapping in". 
- Usually this scheduler swaps out the following processes:
a)   processes that have been inactive for some time
b)   processes that have been page faulting frequently
c)   processes having a low priority
d) processes that take up large chunks of memory, in order to release main memory to other processes
- The scheduler later swaps these processes back in whenever sufficient memory is available and the processes are unblocked and no longer waiting.

3. Short-term Schedulers: 
It takes the decision regarding which process to execute after each clock interrupt. 

