


Friday, September 27, 2013

What are the parameters of QoS - Quality of Service?

With the arrival of new technologies, applications, and services in the field of networking, competition is rising rapidly. Each of these technologies, services, and applications is developed with the aim of delivering QoS (quality of service) that at least matches, and preferably exceeds, that of the legacy equipment. Network operators and service providers operate under trusted brands, and maintaining those brands is of critical importance to their business. The biggest challenge is to put the technology to work in such a way that all customer expectations for availability, reliability, and quality are met, while at the same time giving network operators the flexibility to adopt new techniques quickly.

What is Quality of Service?

- The quality of service is defined by certain parameters, which play a key role in the acceptance of new technologies. 
- ETSI is the organization working on several QoS specifications.
- It has been actively participating in organizing inter-operability events regarding speech quality.
- The importance of the QoS parameters has been growing with the increasing inter-connectivity of networks and the interaction between many service providers and network operators in delivering communication services.
- Quality of service gives you the ability to specify parameters on multiple queues in order to boost the performance and throughput of wireless traffic such as VoIP (voice over IP) and streaming media, both audio and video. 
- The same can be done for ordinary IP traffic through the access points.
- Configuring quality of service on these access points involves setting parameters on the queues that already exist for the various types of wireless traffic. 
- The minimum and maximum wait times for transmission are also specified. 
- This is done through the contention windows. 
- The flow of traffic from the access point to the client station is controlled by the AP EDCA (enhanced distributed channel access) parameters. 
- The traffic flow from the client station to the access point is controlled by the station EDCA parameters.

Below we mention some parameters:
Ø  QoS preset: The listed options are WFA defaults, optimized for voice, and custom.
Ø  Queue: For the different types of data transmitted between the AP and the client stations, different queues are defined:
- Voice (data 0): High-priority queue with minimum delay. Time-sensitive data such as VoIP and streaming media is automatically put in this queue.
- Video (data 1): High-priority queue with minimum delay. Time-sensitive video data is automatically put into this queue.
- Best effort (data 2): Medium-priority queue with medium delay and throughput. All traditional IP data goes into this queue. 
- Background (data 3): Lowest-priority queue with high throughput. Bulk data that requires high throughput and is not time-sensitive, such as FTP data, is queued here.

Ø AIFS (arbitration inter-frame space): This limits the waiting time for data frames. The wait is measured in slots, and valid values lie in the range of 1 to 255.
Ø Minimum contention window (cwMin): This QoS parameter is supplied as input to the algorithm that determines the initial random back-off wait time for re-transmission (a configuration sketch follows this list).
Ø Maximum contention window (cwMax)
Ø Maximum burst
Ø Wi-Fi Multimedia (WMM)
Ø TXOP limit
Ø Bandwidth
Ø Variation in delay (jitter)
Ø Synchronization
Ø Cell error ratio
Ø Cell loss ratio
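
To make the queue/parameter relationship concrete, here is a minimal sketch that represents per-queue EDCA parameters as plain data. This is a hypothetical representation, not a real access-point API, and the values are illustrative defaults only (loosely modeled on common WMM settings).

```python
# Hypothetical, illustrative EDCA queue configuration (not a real AP API).
from dataclasses import dataclass

@dataclass
class EdcaQueue:
    name: str        # queue label (voice, video, best effort, background)
    aifs: int        # arbitration inter-frame space, in slots (1-255)
    cw_min: int      # minimum contention window: input to the back-off algorithm
    cw_max: int      # maximum contention window: upper bound for back-off
    txop_limit: int  # transmission opportunity limit (0 = one frame at a time)

# Illustrative ordering: smaller AIFS and contention windows = higher priority.
queues = [
    EdcaQueue("voice (data 0)",       aifs=2, cw_min=3,  cw_max=7,    txop_limit=47),
    EdcaQueue("video (data 1)",       aifs=2, cw_min=7,  cw_max=15,   txop_limit=94),
    EdcaQueue("best effort (data 2)", aifs=3, cw_min=15, cw_max=1023, txop_limit=0),
    EdcaQueue("background (data 3)",  aifs=7, cw_min=15, cw_max=1023, txop_limit=0),
]

for q in queues:
    print(q)
```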



Sunday, July 21, 2013

Comparison between Virtual Circuit and Datagram subnets

Difference #1:
- In virtual circuits, packets carry a short circuit number rather than the full destination address. 
- This reduces the memory and bandwidth required. 
- It also makes virtual circuits cheaper in this respect. 
- Datagrams, on the other hand, have to carry the full destination address rather than a small circuit number.
- This causes significant per-packet overhead in datagram subnets. 
- It also wastes bandwidth. 
- All this implies that datagram subnets are more costly in this respect than virtual circuits. 

Difference #2:
- Virtual circuits require a setup phase. 
- Establishing a connection takes considerable time and resources. 
- A datagram subnet, in contrast, requires no setup phase. 
- Hence, no resources are spent on connection establishment.

Difference #3:
- In virtual circuits, routers use the circuit numbers for indexing. 
- These numbers index a table that is used to find the outgoing line for the packet. 
- This procedure is quite simple compared with the one used in datagram subnets. 
- The procedure used in datagram subnets for determining where to forward a packet is more complex (see the sketch below). 
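
The contrast between the two lookups, and the header overhead from Difference #1, can be sketched as follows. The table contents and line names are made-up examples, and the datagram lookup is simplified (real routers use longest-prefix matching).

```python
# Hypothetical forwarding tables for illustration only.
import ipaddress

# Virtual circuit: the packet carries only a short circuit number; the
# router indexes a small table keyed on (incoming line, circuit number).
vc_table = {
    ("line0", 5): ("line2", 9),    # -> (outgoing line, new circuit number)
    ("line1", 5): ("line2", 12),
}

def forward_vc(incoming_line, circuit_no):
    return vc_table[(incoming_line, circuit_no)]   # a simple table index

# Datagram: every packet carries the full destination address; the router
# must match it against routing-table prefixes for every packet.
routing_table = {
    "10.0.0.0/8": "line2",
    "192.168.1.0/24": "line3",
}

def forward_datagram(dest_addr):
    for prefix, line in routing_table.items():
        if ipaddress.ip_address(dest_addr) in ipaddress.ip_network(prefix):
            return line
    return None   # no matching route

print(forward_vc("line0", 5))            # ('line2', 9)
print(forward_datagram("192.168.1.20"))  # line3
```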

Difference #4:
- Virtual circuits allow resources to be reserved in advance, when the connection is established.
- This has a great advantage: congestion is avoided in the subnet. 
- In datagram subnets, by contrast, congestion is quite difficult to avoid. 

Difference #5:
- If a router crashes, it loses its memory. 
- Even if it comes back up after some time, all the virtual circuits passing through it must be aborted. 
- This is not a major problem in datagram subnets. 
- Here, if a router crashes, the only packets that suffer are the ones queued at that router at that instant. 

Difference #6:
- Virtual circuits can vanish as a result of a fault or loss on the communication line in use.
- In datagram subnets, it is comparatively easy to compensate for a fault or loss on a communication line. 

Difference #7:
- In virtual circuits there is one more cause of traffic congestion. 
- That cause is the use of fixed routes for transmitting data packets through the network. 
- This also leads to the problem of unbalanced traffic. 
- In datagram subnets, the routers are responsible for balancing the traffic over the entire network.
- This is possible because the route can change halfway through a sequence of packets. 

Difference #8:
- Virtual circuits are one way of implementing connection-oriented services. 
- For datagram subnets, a number of protocols are defined; the Internet Protocol provides the datagram service at the internet layer. 
- In contrast with virtual circuits, datagram subnets provide a connection-less service. 
- It is a best-effort delivery service and, at the same time, unreliable. 
- A number of higher-level protocols, such as TCP, depend on the datagram service of the Internet Protocol.
- These add the required functionality on top of it. 
- The datagram service of IP is also used by UDP. 
- The fragments of a datagram may be referred to as data packets. 
- IP and UDP both provide unreliable service, which is why their units are termed datagrams. 
- The units of TCP are referred to as TCP segments to distinguish them from datagrams. 


Wednesday, July 3, 2013

What are five key assumptions in dynamic channel allocation?

Putting the available bandwidth of a cellular telephone system to efficient use is an important problem in providing good service to the largest possible number of customers. The problem has become critical owing to the rapid growth in the number of cellular telephone users. 

- A communication channel is nothing but a band of frequencies, which a number of users can use simultaneously provided they are far enough apart from each other. 
- The minimum distance at which no interference occurs between users is known as the channel reuse constraint. 
- A cellular telephone system divides the service area into a number of regions commonly known as cells. 
- Each cell has its own base station for handling the calls within that cell. 
- The bandwidth of the communication channel is permanently partitioned into many channels. 
- These channels are then allocated to the cells in such a way that the channel reuse constraint is not violated by the calls. 
- There are a number of ways of allocating the channels. 
- Some of them are better than others at reliably making channels available to all the cells. 

Few examples of channel allocation methods are:
  1. Fixed assignment method
  2. Dynamic allocation method
  3. Reinforcement learning method
About Dynamic Channel Allocation
- One type of dynamic channel allocation is BDCL, or borrowing with directional channel locking. 
- Of the channel allocation methods mentioned above, dynamic allocation is considered the best according to some studies. 
- It is somewhat heuristic in nature. 
- In dynamic allocation, channels are allocated as in the fixed assignment method, but a cell is permitted to borrow channels from other cells whenever required. 
- The channels in each cell are arranged in a specific order, and this ordering is used to decide which channels to borrow and how to reassign calls dynamically within the cells (a simplified sketch follows this list).
- There are static allocation techniques as well, but they do not seem to work as well as the dynamic techniques. 
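
The borrowing idea can be sketched in a few lines. The cells, neighbor lists, and borrowing rules below are simplified assumptions for illustration, not the published BDCL algorithm.

```python
# Toy fixed-assignment-plus-borrowing allocator (simplified assumptions).
nominal = {"A": {1, 2}, "B": {3, 4}, "C": {5, 6}}   # channels owned by each cell
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
in_use = {cell: set() for cell in nominal}

def channel_free(ch, cell):
    # simplified reuse constraint: the channel must be idle in this cell
    # and in every neighboring cell
    return all(ch not in in_use[c] for c in [cell] + neighbors[cell])

def allocate(cell):
    # 1. prefer the cell's own nominal channels (plain fixed assignment)
    for ch in sorted(nominal[cell]):
        if channel_free(ch, cell):
            in_use[cell].add(ch)
            return ch
    # 2. otherwise try to borrow an idle nominal channel from a neighbor
    for nb in neighbors[cell]:
        for ch in sorted(nominal[nb]):
            if channel_free(ch, cell):
                in_use[cell].add(ch)
                return ch
    return None   # call is blocked: no channel satisfies the constraint

print(allocate("A"), allocate("A"), allocate("A"))   # 1, 2, then borrowed 3
```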

In dynamic channel allocation, five assumptions are always made, as discussed below:

Station model: 
- The model consists of N independent stations, each of which generates one frame at a time. 
- Once a frame is generated, the station is blocked until that frame has been transmitted successfully. 
- This means a station cannot queue multiple frames for transmission. 
- For example, a transmission gap of 100 bits may be required between consecutive frames.

Single channel assumption:  
- A single medium is shared by all the stations. 
- Through it, all the stations can transmit and receive.

Collision assumption: 
- A collision occurs whenever two frames are transmitted at the same time. 
- Both frames that collide have to be re-transmitted.

Transmission model: 
- There are two types: the continuous time model and the slotted time model. 
- In the former, transmission can start at any given time. 
- In the latter, transmission can only begin at the start of a time slot.

Carrier sense: 
- This assumption comes in two variants: carrier sense and no carrier sense. 
- With carrier sense, stations can tell whether the channel is occupied before using it.
- With no carrier sense, stations cannot tell whether the channel is occupied before transmitting (a small sketch of the two variants follows).
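
The two variants can be sketched as follows. The Channel class is a stand-in for the shared medium, not a real MAC-layer API, and the back-off timing is an arbitrary assumption.

```python
# Minimal sketch of carrier sense vs. no carrier sense (hypothetical medium).
import random
import time

class Channel:
    def __init__(self):
        self.busy = False

def send(channel, frame):
    print("transmitting:", frame)   # placeholder for the real transmission

def transmit_with_carrier_sense(channel, frame):
    # sense first: defer with a short random back-off while the channel
    # is occupied, then transmit
    while channel.busy:
        time.sleep(random.uniform(0.001, 0.01))
    channel.busy = True
    send(channel, frame)
    channel.busy = False

def transmit_without_carrier_sense(channel, frame):
    # no sensing: transmit blindly; a collision is discovered only later,
    # e.g. when no acknowledgement arrives
    send(channel, frame)

ch = Channel()
transmit_with_carrier_sense(ch, "frame-1")
transmit_without_carrier_sense(ch, "frame-2")
```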

- Also, it becomes difficult for the dynamic allocation method to maintain favorable usage patterns as calls start to saturate the system. 


Monday, June 17, 2013

Explain the Round Robin CPU scheduling algorithm

There are a number of CPU scheduling algorithms, all with different properties that make them appropriate for different conditions. 
- The round robin (RR) scheduling algorithm is commonly used in time-sharing systems. 
- It is the most appropriate scheduling algorithm for time-sharing operating systems. 
- It shares many similarities with the FCFS scheduling algorithm but adds one feature. 
- That feature is preemption, with a context switch occurring between two processes. 
- The algorithm defines a small unit of time termed the time slice or time quantum. 
- Time quanta typically range from 10 ms to 100 ms.
- The ready queue in round robin scheduling is treated as circular. 

How to implement Round Robin CPU scheduling algorithm

Now we shall see about the implementation of the round robin scheduling:
  1. The ready queue is maintained as a FIFO (first in, first out) queue of processes.
  2. New processes are added at the rear of the ready queue, and the process to execute next is taken from the front.
  3. The CPU scheduler thus picks the first process in the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.
  4. In some cases the CPU burst of a process is shorter than the time quantum. If so, the process releases the CPU voluntarily, and the scheduler moves on to the next process in the ready queue.
  5. In other cases the CPU burst of a process is longer than the time quantum. The timer then interrupts the processor, the process is preempted and put at the rear of the ready queue, and the scheduler moves on to the next process in the queue (see the simulation sketch after this list).
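
Here is a minimal simulation of the steps above. The burst times and the quantum are classic illustrative values (all processes arriving at time 0), and context-switch overhead is ignored.

```python
# Minimal round-robin sketch; bursts and quantum are illustrative only.
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, cpu_burst) pairs, all arriving at time 0."""
    ready = deque(processes)                  # FIFO ready queue
    clock = 0
    finish = {}
    while ready:
        name, remaining = ready.popleft()
        if remaining <= quantum:
            clock += remaining                # burst ends within the slice:
            finish[name] = clock              # the CPU is released voluntarily
        else:
            clock += quantum                  # timer interrupt: preempt and
            ready.append((name, remaining - quantum))   # requeue at the rear
    return finish

print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
# -> {'P2': 7, 'P3': 10, 'P1': 30}
```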
The average waiting time under round robin scheduling has been observed to be quite long. 
- In this algorithm, no process is allocated more than one time quantum in a row. 
- The only exception is when there is a single process left to execute.
- If a process exceeds its time quantum, it is preempted and put back at the tail of the ready queue.
- Thus, round robin is a preemptive algorithm. 
- The size of the time quantum greatly affects the performance of the round robin algorithm.
- If the time quantum is too large, the algorithm degenerates into FCFS. 
- On the other hand, if the quantum is very small, the RR approach is called processor sharing. 
- An illusion is created in which every process seems to have its own processor running at a fraction of the speed of the real processor. 
- Further, context switching affects the performance of the RR scheduling algorithm.
- A certain amount of time is spent in switching from one process to another. 
- In this time the registers and memory maps are loaded, a number of lists and tables are updated, the memory cache is flushed and reloaded, and so on.
- The smaller the time quantum, the more often context switches occur. 


Saturday, June 15, 2013

What is Process State Diagram?

In systems where multiple processors or multitasking are involved, a process goes through a number of states. In this article we discuss these states. 
The kernel of the operating system may not recognize all of these states distinctly, but they still act as useful abstractions for understanding how processes are executed. 
These states can be looked up in a process state diagram for a systematic view. The diagram shows the transitions of a process between the various states with arrows. Depending on the situation, processes can be stored in secondary (virtual) memory or in main memory.

Process States

- The primary process states occur in all types of systems. 
- Processes in these states are usually stored in main memory. 
Basically there are 5 major states of any process, as discussed below:

Ø  Created: 
- It is also known as the 'new' state. 
- A process occupies this state upon its creation. 
- The process stays in this state while waiting to be admitted to the ready state. 
- The admission scheduler decides whether to admit the process to the next state or to delay it on a short-term or long-term basis. 
- On most desktop computers this admission is approved automatically. 
- In systems with real-time operating systems, however, this is not true. 
- There, admission might be delayed by a certain amount of time.
- If too many processes are admitted to the ready state in a real-time operating system, over-contention and over-saturation might occur, preventing the system from meeting its deadlines.

Ø  Ready or Waiting: 
- This state is taken up by a process when it has been loaded into the physical memory of the system and is waiting to be executed by a processor, or more precisely, to be context-switched in by the dispatcher. 
- At any instant there might be a number of processes waiting for execution. 
- These processes wait in a queue called the run queue, from which only one process at a time is taken up by a processor. 
- Processes that are waiting for input from some event are not put into this ready queue.

Ø  Running: 
- When a process is selected by the CPU scheduler for execution, its state is changed to running. 
- One of the processors executes the instructions of the process one by one. 
- Only one process can be run by a processor at a time.

Ø Blocked: 
- A process that is waiting on some event, such as an I/O operation, is put into the blocked state. 
- Note that a process that merely runs out of its allocated CPU time is normally preempted back to the ready state rather than blocked.

Ø  Terminated: 
- A process is terminated when either its execution is complete or it has been explicitly killed.
- This state is called terminated or halted. 
- If the process is not removed from main memory after entering the terminated state, it is called a zombie process. 

There are two additional states supporting virtual memory. In these states the process is usually stored in the secondary memory of the system:

Ø Swapped out or Waiting: 
- A process is said to be swapped out when it is removed from primary memory and placed in secondary memory. 
- This is done by the mid-term scheduler. 
- After this, the state of the process changes to waiting.  

Ø  Swapped out and Blocked:
- In some cases, processes that were in the blocked state might be swapped out. 
- The same process might later be swapped back in, provided conditions permit (see the transition sketch below).
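
The whole diagram can be captured compactly as a transition table. This is a small sketch of the states described above; the set of legal transitions is a simplified reading of the diagram, not a specification of any particular kernel.

```python
# Process state diagram as data (simplified transition set).
from enum import Enum, auto

class State(Enum):
    CREATED = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    TERMINATED = auto()
    SWAPPED_OUT_WAITING = auto()
    SWAPPED_OUT_BLOCKED = auto()

TRANSITIONS = {
    State.CREATED: {State.READY},                 # admitted by the scheduler
    State.READY: {State.RUNNING, State.SWAPPED_OUT_WAITING},
    State.RUNNING: {State.READY,                  # preempted (quantum expired)
                    State.BLOCKED,                # waiting on an event (I/O)
                    State.TERMINATED},            # finished or killed
    State.BLOCKED: {State.READY, State.SWAPPED_OUT_BLOCKED},
    State.SWAPPED_OUT_WAITING: {State.READY},     # swapped back in
    State.SWAPPED_OUT_BLOCKED: {State.SWAPPED_OUT_WAITING},
    State.TERMINATED: set(),
}

def move(current, target):
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.name} -> {target.name}")
    return target

print(move(State.CREATED, State.READY))   # State.READY
```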



What are the CPU Scheduling Criteria?

Scheduling is an essential concept in multitasking, multiprocessor, and distributed systems. Several schedulers are available for this purpose, but these schedulers require criteria upon which they can decide how to schedule processes. In this article we discuss these scheduling criteria. Today a number of scheduling algorithms are available, all with different properties, which is why they may work on different scheduling criteria. The chosen algorithm may also favor one class of processes more than another.

What Criteria are used by algorithms for Scheduling?


Listed below are some of the criteria used by these algorithms for scheduling:
1. CPU utilization:
- It is a property of a good system to keep the CPU as busy as possible at all times.
- In principle, CPU utilization ranges from 0 percent to 100 percent.
- In practice, it is around 40 percent for lightly loaded systems and around 90 percent for heavily loaded ones.

2. Throughput:
- Work is being done when the CPU is busy executing processes.
- Throughput is one measure of CPU performance and can be defined as the number of processes completed per unit of time.
- For short transactions, throughput might be around 10 processes per second.
- For longer transactions, it may be as low as one process per hour.

3. Turnaround time:
- This is an important criterion from the point of view of a process.
- It tells how much time has been taken for the execution of a process.
- Turnaround time can be defined as the time elapsed from the submission of a process until its completion.

4. Waiting time:
- The total amount of time a process takes to complete is not really affected by the CPU scheduling algorithm.
- Rather, the algorithm affects only the time the process spends waiting.
- The time for which a process waits in the ready queue is called the waiting time.

5. Response time:
- Turnaround time is not a good criterion in all situations.
- Response time is preferable in the case of interactive systems.
- It often happens that a process produces some output fairly quickly, much sooner than its overall completion.
- The process can then continue with its next instructions while the output is delivered.
- The time from the submission of a process until the production of its first response is the response time, and it is another criterion for CPU scheduling algorithms.

All of these are primary performance criteria, of which one or more can be selected by a typical CPU scheduler. The scheduler might rank these criteria by importance. One common problem in selecting performance criteria is the possibility of conflict between them.
For example, increasing the number of active processes increases CPU utilization but at the same time worsens response time. It is often desirable to reduce waiting time and turnaround time as well. In a number of cases the average measure is optimized, but there are certain cases where it is more beneficial to optimize the maximum or minimum values.
A scheduling algorithm that maximizes throughput will not necessarily minimize turnaround time. Given a mix of short and long jobs, a scheduler that runs only the short jobs produces the best throughput, but the turnaround time for the long jobs becomes unacceptably high. A small worked example follows.
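
The sketch below computes average waiting, turnaround, and response times for three made-up CPU bursts run to completion in order (FCFS-style, all arriving at time 0); here the first response is approximated as the first time a process gets the CPU.

```python
# Worked example of the scheduling criteria (illustrative bursts).
def metrics(bursts):
    """bursts: CPU bursts of processes arriving at time 0, run in order."""
    clock = 0
    waiting, turnaround, response = [], [], []
    for burst in bursts:
        response.append(clock)     # first time on the CPU (approximation)
        waiting.append(clock)      # time spent in the ready queue
        clock += burst
        turnaround.append(clock)   # submission (t = 0) to completion
    n = len(bursts)
    return sum(waiting) / n, sum(turnaround) / n, sum(response) / n

avg_wait, avg_tat, avg_resp = metrics([24, 3, 3])
print(avg_wait, avg_tat, avg_resp)   # 17.0 27.0 17.0
```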


Thursday, May 30, 2013

What are the various Disk Scheduling methods?

About Disk Scheduling

The I/O system has got the following layers:
  1. User processes: The functions of this layer include making I/O calls, formatting the I/O, and spooling.
  2. Device-independent software: Functions are naming, blocking, protection, allocation, and buffering.
  3. Device drivers: Functions include setting up the device registers and checking their status.
  4. Interrupt handlers: These wake up the I/O drivers upon the completion of the I/O.
  5. Hardware: Performs the I/O operations.
- Disk drives can be pictured as a large 1-D array of logical blocks, the logical block being the smallest unit of transfer.  
- These blocks are mapped onto the disk sectors in a sequential manner. 
- For disk drives, it is the responsibility of the operating system to use the hardware efficiently, increasing access speed and disk bandwidth. 

Algorithms for Scheduling Disk Requests

There are several algorithms existing for the scheduling of the disk requests:

Ø  SSTF: 
- In this method, the request with the minimum seek time from the current head position is selected. 
- This method is a variant of SJF (shortest job first) scheduling and therefore carries some risk of starving requests.

Ø  SCAN: 
- The disk arm starts from one end of the disk and moves toward the other end, servicing requests along the way until it reaches the opposite end. 
- There the direction of the head is reversed and the process continues. 
- This is sometimes called the elevator algorithm.

Ø  C – SCAN: 
- A better algorithm than the previous one. 
- It offers a more uniform waiting time than SCAN. 
- The head moves from one end to the other, servicing the requests it encounters along the way. 
- The difference is that when the head reaches the other end, it immediately returns to the beginning of the disk without servicing any requests on the return trip, and then starts again. 
- The cylinders are treated as a circular list that wraps around from the last cylinder to the first.

Ø  C – Look: 
- This is a modified version of C-SCAN. 
- Here the arm or head travels only as far as the last request in each direction rather than going all the way to the end of the disk. 
- The direction is then immediately reversed and the process continues (a sketch of these policies follows).
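
The service order each policy produces can be sketched as below. The request queue, head position, and upward initial direction are illustrative assumptions (the classic 200-cylinder textbook example).

```python
# Service-order sketches for SSTF, SCAN, and C-SCAN (illustrative inputs).
def sstf(requests, head):
    order, pending = [], list(requests)
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))   # minimum seek first
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

def scan(requests, head):
    # elevator: service upward to the end of the disk, then reverse
    # (only serviced requests are listed; the head still travels to the end)
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

def c_scan(requests, head):
    # service upward, jump straight back to cylinder 0, then sweep up again
    up = sorted(r for r in requests if r >= head)
    wrapped = sorted(r for r in requests if r < head)
    return up + wrapped

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(sstf(queue, 53))    # [65, 67, 37, 14, 98, 122, 124, 183]
print(scan(queue, 53))    # [65, 67, 98, 122, 124, 183, 37, 14]
print(c_scan(queue, 53))  # [65, 67, 98, 122, 124, 183, 14, 37]
```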

- For disk scheduling it is important that the method be selected according to the requirements. 
- SSTF is the most commonly used and is a natural fit for most needs. 
- For a system that often places a heavy load on the disk, the SCAN and C-SCAN methods can help. 
- The number as well as the kind of requests affects the performance in a number of ways.
- The file-allocation method also influences the requests for disk service. 
- These algorithms should be written as an individual module of the OS so that they can easily be replaced with a different one if required. 
- As a default algorithm, LOOK or SSTF is the most reasonable choice. 

Ways to attach to a disk

There are two ways of attaching the disk:
Ø  Network attached: This attachment is made via a network and is called network-attached storage. All such connected storage devices together form a storage area network.
Ø  Host attached: This attachment is made via the I/O port.


All these disk scheduling methods aim to optimize secondary-storage access and make the whole system more efficient. 


Sunday, May 26, 2013

Where are artificial neural networks applied?


Artificial neural networks have been applied to problems in diverse fields such as engineering, finance, medicine, physics, and biology. 
- All these applications are based on the fact that neural networks can simulate capabilities of the human brain. 
- They have found potential use in classification and prediction problems. 
- These networks can be classified as non-linear, data-driven, self-adaptive approaches. 
- They come in handy as a powerful tool when the underlying data relationship is not known. 
- They find it easy to recognize and learn patterns and can correlate input sets with result values.
- Once the artificial neural networks have been trained, they can be used to predict outcomes from new data. 
- They can even work when the data is unclear, i.e., noisy and imprecise. 
- This is why they prove to be an ideal tool for modeling agricultural data, which is often very complex. 
- Their adaptive nature is their most important feature.
- Because of this feature, models developed using ANNs are quite appealing when data is available but there is a lack of understanding of the problem.
- These networks are particularly useful in areas where statistical methods can be employed. 
- They have uses in various fields:

    1. Classification Problems:
a)   Identification of underwater sonar currents.
b)   Speech recognition
c)   Prediction of the secondary structure of proteins.
d)   Remote sensing
e)   Image classification
f)    Speech synthesis
g)   ECG/ EMG/ EEG classification
h)   Data mining
i)     Information retrieval
j)    Credit card application screening

  2. Time series applications:
a)   Prediction of stock market performance
b)   ARIMA time-series models
c)   Machine/robot control manipulation
d)   Financial, engineering and scientific time series forecasting
e)   Inverse modeling of the vocal tract

  3. Statistical Applications:
a)   Discriminant analysis
b)   Logistic regression
c)   Bayes analysis
d)   Multiple regression

  4. Optimization:
a)   Multiprocessor scheduling
b)   Task assignment
c)   VLSI routing

  5. Real world Applications:
a)   Credit scoring
b)   Precision direct mailing

  6. Business Applications:
a)   Real estate appraisal
b)   Credit scoring: used to determine the approval of a loan based on the applicant's information.

  7. Mining Applications
a)   Geo-chemical modeling using neural pattern recognition technology.

  8. Medical Applications:
a) Hospital patient stay length prediction system: the CRTS/QURI system was developed using a neural network to predict the number of days a patient has to stay in the hospital. The major benefits of this system were cost savings and better patient care. The system required the following 7 inputs:
Ø  Diagnosis
Ø  Complications and comorbidity
Ø  Body systems involved
Ø  Procedure codes and relationships
Ø  General health indicators
Ø  Patient demographics
Ø  Admission category

  9. Management Applications: Jury summoning prediction: a system was developed that could predict the number of jurors actually required. Two inputs were supplied: the type of case and the judge number. The system is known to have saved around 70 million.
  10. Marketing Applications: A neural network was developed to improve the direct-mailing response rate by selecting the individuals most likely to respond to a second mailing. 9 variables were given as input. It saved around 35% of the total mailing cost.
  11. Energy cost prediction: A neural network was developed that could predict the price of natural gas for the next month. It achieved an accuracy of 97%. A toy sketch of the underlying train-then-predict idea follows.
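
The "train, then predict" idea running through all of these applications can be illustrated with a toy single-neuron model. This is a minimal perceptron sketch on made-up data, not any of the deployed systems described above.

```python
# Toy perceptron: learn a decision rule from labeled samples, then predict.
import random

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (features, label) pairs with label 0 or 1."""
    n = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                  # adjust weights on mistakes
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# made-up "screening" data: two features, linearly separable labels
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w, b = train_perceptron(data)
print("weights:", w, "bias:", b)
```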

