
Thursday, May 2, 2013

What is a CPU Scheduler?


Scheduling is a very important concept in multi-tasking operating systems. 
- It is the method by which data flows, threads, and processes are given access to the shared resources of the computer system. 
- These resources include communication bandwidth, processor time, memory, and so on. 
- Scheduling is important because it helps the system use the processor and its resources effectively. 
- It helps in achieving the target QoS (quality of service). 
But what gave rise to scheduling? 
- Almost all modern systems need to carry out multiple tasks (multi-tasking) as well as multiplexing, and both require a scheduling algorithm. 
- Multiplexing means transmitting multiple data flows at the same time. 
- There are some other concerns the scheduler has to deal with. They are:
  1. Throughput: the number of processes completed per unit of time.
  2. Latency: this factor can be subdivided into two parts, namely response time and turnaround time. Response time is the time from the submission of a process until the processor produces its first output. Turnaround time is the period elapsed between the submission of a process and its completion.
  3. Waiting time/fairness: giving each process an equal share of CPU time, or allocating CPU time according to the priority of the processes. The time that processes spend waiting in the ready queue is also counted here.
- In practice, conflicts may arise between these goals, for example latency versus throughput (see the sketch after this list). 
- If such a conflict occurs, the scheduler has to implement a suitable compromise. 
- The needs and objectives of the user decide which of the above concerns is given preference. 
- In real-time environments such as robotics or embedded systems, it is the duty of the scheduler to ensure that all processes meet their deadlines. 
- This is important for maintaining the stability of the system. 
- Mobile devices are given scheduled tasks which are then managed by an administrative back end.
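To make these metrics concrete, here is a minimal sketch that computes throughput, turnaround time, and waiting time for a small, hypothetical workload (the process names and times are made up for illustration, not taken from any real system):

```python
# Hypothetical example: three processes, all submitted at time 0,
# run back-to-back on one CPU (times are in arbitrary units).
jobs = [
    {"name": "A", "arrival": 0, "burst": 4},
    {"name": "B", "arrival": 0, "burst": 2},
    {"name": "C", "arrival": 0, "burst": 6},
]

time = 0
for job in jobs:                      # run in submission order
    job["start"] = time               # first moment the job gets the CPU
    time += job["burst"]
    job["completion"] = time

throughput = len(jobs) / time                                # jobs completed per time unit
turnaround = [j["completion"] - j["arrival"] for j in jobs]  # latency per job
waiting    = [j["start"] - j["arrival"] for j in jobs]       # time spent in the ready queue

print("throughput:", throughput)                             # 3 / 12 = 0.25
print("average turnaround:", sum(turnaround) / len(jobs))    # (4 + 6 + 12) / 3
print("average waiting:", sum(waiting) / len(jobs))          # (0 + 4 + 6) / 3
```

Running the shortest job first would lower the average turnaround and waiting times while leaving throughput unchanged, which is the idea behind the SJF algorithm discussed in a later post.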

Types of CPU Schedulers

There are several types of CPU schedulers, as discussed below:
1. Long-term Schedulers: 
- These schedulers perform long-term scheduling and are also known as high-level schedulers or admission schedulers. 
- It is up to them to determine which processes and jobs are admitted to the ready queue. 
- When an attempt is made to execute a program, the long-term scheduler decides whether this program will be admitted to the set of currently executing processes. 
- Thus, this scheduler dictates which processes are to be run and what degree of concurrency there is to be.
- In other words, it decides how many processes are to be executed concurrently. 
- It also decides how the split between CPU-intensive and I/O-intensive processes is handled (a small admission sketch follows). 
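As a rough illustration, an admission decision might look like the sketch below. All names and thresholds here are hypothetical and not taken from any real kernel; the point is only that admission limits the degree of multiprogramming and the CPU-bound/I/O-bound mix:

```python
# Hypothetical admission control for a long-term scheduler.
MAX_CONCURRENT = 10          # assumed cap on concurrently admitted processes
MAX_CPU_BOUND_RATIO = 0.7    # assumed cap on the share of CPU-bound processes

def admit(job, admitted):
    """Decide whether a newly submitted job may enter the ready queue."""
    if len(admitted) >= MAX_CONCURRENT:
        return False                                  # concurrency limit reached
    if job["kind"] == "cpu":
        cpu_bound = sum(1 for j in admitted if j["kind"] == "cpu")
        if (cpu_bound + 1) / (len(admitted) + 1) > MAX_CPU_BOUND_RATIO:
            return False                              # would skew the CPU/I-O mix too far
    return True

admitted = [{"name": "editor", "kind": "io"}, {"name": "encoder", "kind": "cpu"}]
print(admit({"name": "compiler", "kind": "cpu"}, admitted))   # True: the mix stays balanced
```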

2. Medium-term Schedulers: 
- This scheduler temporarily removes processes from main memory and places them in secondary memory, and later brings them back. 
- These operations are called “swapping out” and “swapping in”. 
- Usually this scheduler swaps out the following processes:
a)   processes that have been inactive for some time
b)   processes that have been causing frequent page faults
c)   processes having a low priority
d)   processes that take up large chunks of memory, so that main memory can be released to other processes
- The scheduler later swaps these processes back in whenever sufficient memory is available and the processes are unblocked and no longer in a waiting state (see the sketch below).
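A minimal sketch of how swap-out victims might be chosen based on the four criteria above. The field names, weights, and numbers are all hypothetical and only meant to show the idea of scoring candidates:

```python
# Hypothetical swap-out selection for a medium-term scheduler.
def swap_out_candidates(processes, need_bytes):
    """Pick processes to move to secondary storage until enough memory is freed."""
    def score(p):
        s = 0
        s += p["idle_ticks"]                    # (a) inactive for some time
        s += 10 * p["page_faults"]              # (b) frequent page faulting
        s += 5 * (10 - p["priority"])           # (c) low priority (0 = highest here)
        s += p["resident_bytes"] // (1 << 20)   # (d) large memory footprint (in MiB)
        return s

    victims, freed = [], 0
    for p in sorted(processes, key=score, reverse=True):
        if freed >= need_bytes:
            break
        victims.append(p["name"])
        freed += p["resident_bytes"]
    return victims

procs = [
    {"name": "daemon", "idle_ticks": 500, "page_faults": 2, "priority": 8, "resident_bytes": 64 << 20},
    {"name": "game",   "idle_ticks": 1,   "page_faults": 0, "priority": 1, "resident_bytes": 512 << 20},
    {"name": "shell",  "idle_ticks": 300, "page_faults": 0, "priority": 5, "resident_bytes": 8 << 20},
]
print(swap_out_candidates(procs, 50 << 20))   # ['daemon'] – long-idle, low-priority process goes first
```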

3. Short-term Schedulers: 
- It decides which of the ready processes is to be executed next after each clock interrupt (a minimal tick-handler sketch follows). 
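A minimal, priority-based sketch of that decision; the process names and priorities are hypothetical:

```python
import heapq

# Ready queue ordered by (priority, arrival order); lower number = higher priority.
ready = []
heapq.heappush(ready, (2, 0, "editor"))
heapq.heappush(ready, (1, 1, "audio"))
heapq.heappush(ready, (3, 2, "backup"))

def on_clock_interrupt(running):
    """Called on every timer tick: preempt if a higher-priority process is ready."""
    if ready and (running is None or ready[0][0] < running[0]):
        if running is not None:
            heapq.heappush(ready, running)   # put the preempted process back
        return heapq.heappop(ready)          # dispatch the best ready process
    return running                           # otherwise keep the current process

current = on_clock_interrupt(None)
print(current)   # (1, 1, 'audio') – the highest-priority ready process is dispatched
```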


Tuesday, April 23, 2013

What are Throughput, Turnaround Time, Waiting Time and Response Time?


In this article we discuss four important terms that we often come across while dealing with processes. These four factors are:
1. Throughput
2. Turnaround Time
3. Waiting Time
4. Response Time

What is Throughput?

- In communication networks such as packet radio and Ethernet, throughput refers to the rate of successful delivery of data over the channel. 
- The data might be delivered over either a logical link or a physical link, depending on the type of communication being used. 
- This throughput is measured in bits per second (bps) or in data packets per slot. 
- Another term common in network performance is the aggregate throughput, or system throughput. 
- This equals the sum of the data rates at which data is delivered to every terminal in the network. 
- In computer systems, throughput means the rate at which the CPU successfully completes tasks in a given period of time. 
- Queuing theory is used for the mathematical analysis of throughput. 
- Throughput is often used synonymously with digital bandwidth consumption. 
- Another related term is the maximum throughput, which is synonymous with the digital bandwidth capacity (see the sketch below).
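The two uses of the word can be kept apart with a small calculation; the numbers below are hypothetical:

```python
# Network throughput: successfully delivered bits over the measurement window.
delivered_bits = 12_000_000          # hypothetical: 12 Mbit delivered
window_seconds = 10
print(delivered_bits / window_seconds)   # 1,200,000 bps = 1.2 Mbps

# CPU throughput: tasks completed over the same measurement window.
tasks_completed = 45                 # hypothetical count
print(tasks_completed / window_seconds)  # 4.5 tasks per second
```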

What is Turnaround Time?

- In computer systems, the turnaround time is the total time from the submission of a task or thread for execution until its completion. 
- The turnaround time varies depending on the programming language used and on how the software was written.
- It covers the whole amount of time taken to deliver the desired output to the end user after the task has been submitted. 
- It is counted among the metrics used to evaluate the scheduling algorithms of operating systems. 
- In batch systems, the turnaround time is higher because it also includes the time taken to form the batches, execute them, and return the output (see the sketch below).
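A small sketch of the definition, with hypothetical submission and completion times:

```python
# Turnaround time = completion time - submission time (times in seconds).
jobs = [
    {"name": "J1", "submitted": 0, "completed": 14},
    {"name": "J2", "submitted": 3, "completed": 9},
    {"name": "J3", "submitted": 5, "completed": 20},
]

turnarounds = [j["completed"] - j["submitted"] for j in jobs]
print(turnarounds)                          # [14, 6, 15]
print(sum(turnarounds) / len(turnarounds))  # average turnaround ≈ 11.67 s
```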

What is Waiting Time?

 
- This is the time between requesting an action and the moment it occurs. 
- Waiting time depends upon the speed and make of the CPU and the architecture it uses. 
- If the processor supports a pipelined architecture, the process is said to be waiting in the pipeline. 
- When the current task on the processor is completed, the waiting task is passed to the CPU for execution. 
- When the CPU starts executing this task, the waiting period is said to be over. 
- The status of a task that is waiting is set to ‘waiting’; from the waiting status it changes to active, and then it halts (see the sketch below).
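In CPU scheduling, waiting time is commonly computed as turnaround time minus the CPU burst, i.e., the part of the turnaround spent sitting in the ready queue rather than executing. A minimal sketch with hypothetical numbers:

```python
# Waiting time = turnaround time - CPU burst time (times in seconds).
processes = [
    {"name": "P1", "turnaround": 14, "burst": 6},
    {"name": "P2", "turnaround": 6,  "burst": 4},
    {"name": "P3", "turnaround": 15, "burst": 5},
]

for p in processes:
    p["waiting"] = p["turnaround"] - p["burst"]

print([p["waiting"] for p in processes])   # [8, 2, 10]
```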

What is Response Time?

 
- The time taken by a computer system or functional unit to react or respond to a supplied input is called the response time. 
- In data processing, the response time the user perceives is the time between the operator entering a request at a terminal and the instant at which the first character of the response appears.
- In data systems, the response time can be defined as the time from the receipt of the EOT (end of transmission) of an inquiry message to the start of the transmission of the response to that inquiry. 
- Response time is an important concept in real-time systems, where it is the time that elapses from the dispatch of a request until its completion. 
- However, one should not confuse response time with the WCET (worst-case execution time).
- The WCET is the maximum time the execution of a task can take without any interference. 
- Response time also differs from the deadline. 
- The deadline is the latest time at which the output is still valid (see the sketch below). 
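To keep these quantities apart, here is a small sketch contrasting response time, turnaround time, and a deadline check; all times are hypothetical:

```python
# Hypothetical request, times in milliseconds measured from submission.
submitted   = 0
first_reply = 40     # the first character of the response appears
completed   = 250    # the full result has been delivered
deadline    = 300    # latest time at which the output is still useful

response_time  = first_reply - submitted   # 40 ms: what the user notices first
turnaround     = completed - submitted     # 250 ms: total time to completion
meets_deadline = completed <= deadline     # True: the result is still valid

print(response_time, turnaround, meets_deadline)
```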


Wednesday, August 26, 2009

Shortest-Job-First (SJF) Scheduling

Shortest-Job-First (SJF) is a non-preemptive discipline in which the waiting job (or process) with the smallest estimated run-time-to-completion is run next. In other words, when the CPU becomes available, it is assigned to the process that has the smallest next CPU burst. SJF scheduling is especially appropriate for batch jobs whose run times are known in advance. Since the SJF algorithm gives the minimum average waiting time for a given set of processes, it is provably optimal with respect to that metric. The SJF algorithm favors short jobs (or processes) at the expense of longer ones.

Example:
Process   Burst time   Arrival
P1        6            0
P2        8            0
P3        7            0
P4        3            0

Gantt chart (SJF runs P4, P1, P3, P2):
| P4 | P1 | P3 | P2 |
0    3    9    16   24

Average waiting time: (3 + 16 + 9 + 0) / 4 = 7
With FCFS (order P1, P2, P3, P4): (0 + 6 + (6 + 8) + (6 + 8 + 7)) / 4 = 10.25
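The two averages can be reproduced with a short sketch: non-preemptive SJF simply orders the (simultaneously arriving) processes by burst time, while FCFS keeps the submission order:

```python
# Non-preemptive SJF vs. FCFS for the example above (all processes arrive at t = 0).
bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}

def average_waiting_time(order):
    """Waiting time of each process = its start time, since every arrival is at 0."""
    time, total_wait = 0, 0
    for name in order:
        total_wait += time           # how long this process waited before starting
        time += bursts[name]
    return total_wait / len(order)

sjf_order  = sorted(bursts, key=bursts.get)   # ['P4', 'P1', 'P3', 'P2']
fcfs_order = ["P1", "P2", "P3", "P4"]         # submission order

print(average_waiting_time(sjf_order))    # 7.0
print(average_waiting_time(fcfs_order))   # 10.25
```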

Problem: SJF minimizes the average waiting time because it services small processes before it services large ones. While it minimizes average waiting time, it may penalize processes with large service-time requests. If the ready list is saturated, processes with large service times tend to be left in the ready list while small processes receive service. In the extreme case, where the system has little idle time, processes with large service times may never be served. This total starvation of large processes can be a serious liability of this algorithm.

