
Wednesday, June 19, 2013

Explain the Priority CPU scheduling algorithm

A number of scheduling algorithms are available today, each suited to a different kind of scheduling environment. In this article we give a brief explanation of the priority CPU scheduling algorithm.

For those who are not familiar with this scheduling algorithm, note that the shortest job first (SJF) algorithm is a special case of priority scheduling.

- This algorithm associates a priority with each and every thread or process.
- Out of all the ready processes, the one with the highest priority is chosen and given to the processor for execution.
- Thus, the priority decides which process is executed first.
- Two or more processes may have the same priority.
- In such a case the FCFS (first come, first served) scheduling algorithm is applied: the process first in the queue is executed first.
- SJF is essentially a special case of the priority algorithm.
- Here, the priority of a process (indicated by p) is simply taken as the inverse of its predicted next CPU burst.
- This implies that if a process has a large CPU burst, its priority will be low, and if its CPU burst is small, its priority will be high.
- Numbers in some fixed range, such as 0 to 4095 or 0 to 7, are used to indicate priorities.
- One thing to note is that there is no general agreement on whether 0 indicates the lowest or the highest priority.
- In some systems low numbers indicate low priorities, while in others low numbers mean high priorities.
- The latter convention, i.e., using low numbers to represent high priorities, is more common.
For example, consider the 5 processes P1, P2, P3, P4 and P5 having CPU bursts of 10, 1, 2, 1 and 5 respectively, and priorities of 3, 1, 4, 5 and 2 respectively. Using priority scheduling (low number meaning high priority), the processes will be executed in the following order:
P2, P5, P1, P3, P4
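
This order can be reproduced in a few lines of code. Below is a minimal Python sketch, assuming the convention just described (low number = high priority) and FCFS order as the tie-breaker; the process data is the example above:

```python
# Non-preemptive priority scheduling: pick the highest-priority process first.
# Convention: a lower number means a higher priority; ties fall back to FCFS.
processes = [            # (name, CPU burst, priority)
    ("P1", 10, 3),
    ("P2", 1, 1),
    ("P3", 2, 4),
    ("P4", 1, 5),
    ("P5", 5, 2),
]

# Sort by priority; the original queue position breaks ties (FCFS).
order = sorted(range(len(processes)), key=lambda i: (processes[i][2], i))
print(", ".join(processes[i][0] for i in order))   # P2, P5, P1, P3, P4
```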

Priorities can be defined either externally or internally, which gives two types of priorities:

  1. Internally defined priorities: These make use of measurable quantities to compute a process’s priority, such as memory requirements, time limits, the ratio of average I/O burst to average CPU burst, the number of open files and so on.
  2. Externally defined priorities: These are defined by criteria external to the operating system, such as the importance of the process, the amount of money paid for computer use, the department sponsoring the work, political factors and so on.
Priority scheduling can itself be either preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with that of the currently executing process.

Ø  Preemptive priority scheduling: Here the CPU is preempted if the waiting process has a priority higher than that of the currently executing process.
Ø  Non-preemptive priority scheduling: Here the new process is left waiting in the ready queue until the execution of the current process is complete.


Starvation, or indefinite blocking, is a major problem in priority scheduling. A process is considered blocked if it is ready for execution but has to wait for the CPU. It is quite possible for low-priority processes to be left waiting indefinitely for the CPU. In a system that is heavily loaded most of the time, if the number of high-priority processes is large, the low-priority processes may be prevented from ever getting the processor.


Monday, June 17, 2013

Explain the Round Robin CPU scheduling algorithm

There are a number of CPU scheduling algorithms, each with different properties that make it appropriate for different conditions.
- The round robin (RR) scheduling algorithm is commonly used in time-sharing systems.
- It is the most appropriate scheduling algorithm for time-sharing operating systems.
- It shares many similarities with the FCFS scheduling algorithm but adds one feature: preemption, so that a context switch occurs between two processes.
- In this algorithm a small unit of time is defined, termed the time slice or time quantum.
- These time quanta typically range from 10 ms to 100 ms.
- The ready queue in round robin scheduling is treated as a circular queue.

How to implement Round Robin CPU scheduling algorithm

Now we shall see how round robin scheduling is implemented:
  1. The ready queue is maintained as a FIFO (first in, first out) queue of processes.
  2. New processes are added at the rear end of the ready queue, and the process to be executed is selected from the front end.
  3. The CPU scheduler thus picks the process first in the ready queue, sets a timer to interrupt after one time slice, and dispatches the process.
  4. The CPU burst of a process may be less than the time slice. In this case the process releases the CPU voluntarily, and the scheduler moves on to the next process in the ready queue and fetches it for execution.
  5. In other cases the CPU burst of a process is longer than the time slice. Here the timer interrupts the processor, the process is preempted and put at the rear end of the ready queue, and the scheduler moves on to the next process in the queue. (A simulation of these steps is sketched below.)
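
A minimal Python sketch of this loop, assuming all processes are already in the ready queue at time 0 and ignoring context-switch overhead, might look like this:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR scheduling; return the completion time of each process.

    bursts: dict mapping process name -> CPU burst (hypothetical values).
    """
    queue = deque(bursts)              # ready queue, FIFO order
    remaining = dict(bursts)
    clock, finish = 0, {}
    while queue:
        name = queue.popleft()         # pick the process at the front
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:       # burst ended within the slice:
            finish[name] = clock       # the CPU is released voluntarily
        else:                          # slice expired: preempt and put the
            queue.append(name)         # process back at the rear of the queue
    return finish

print(round_robin({"P1": 10, "P2": 4, "P3": 6}, quantum=3))
# {'P2': 13, 'P3': 16, 'P1': 20}
```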
The average waiting time under round robin scheduling is often quite long.
- In this algorithm, no process is allocated more than one time slice in a row.
- The only exception is when there is just one process left to execute.
- If a process exceeds its time slice, it is preempted and put back at the tail of the queue.
- Thus, we can call this a preemptive algorithm as well.
- The size of the time quantum greatly affects the performance of the round robin algorithm.
- If the time quantum is too large, round robin degenerates into the FCFS algorithm.
- On the other hand, if the quantum is extremely small, the RR approach is called processor sharing.
- An illusion is created that every process has its own processor running at a fraction of the speed of the actual processor.
- Further, context switching affects the performance of the RR scheduling algorithm.
- A certain amount of time is spent in switching from one process to another.
- In this time the registers and memory maps are loaded, a number of lists and tables are updated, the memory cache is flushed and reloaded, and so on.
- The smaller the time quantum, the more often context switching occurs.
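
As a rough illustration, the count below assumes three hypothetical CPU bursts and counts one switch between every pair of consecutive time slices (a simplification that ignores the single-process exception noted above):

```python
import math

bursts = [10, 4, 6]                    # hypothetical CPU bursts
for q in (1, 3, 5, 10):
    dispatches = sum(math.ceil(b / q) for b in bursts)
    print(f"quantum={q:2d} -> context switches={dispatches - 1}")
# quantum= 1 -> context switches=19
# quantum= 3 -> context switches=7
# quantum= 5 -> context switches=4
# quantum=10 -> context switches=2
```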


Saturday, June 15, 2013

What are the CPU Scheduling Criteria?

Scheduling is an essential concept in multitasking, multiprocessor and distributed systems. Several schedulers are available for this purpose, but these schedulers also require criteria upon which to decide how to schedule processes. In this article we discuss these scheduling criteria. A number of scheduling algorithms are available today, each with different properties, which is why they may optimize for different scheduling criteria. Also, the chosen algorithm may favor one class of processes more than another.

What criteria are used by algorithms for scheduling?


Below are some of the criteria used by these algorithms for scheduling:
1. CPU utilization:
- It is a property of a good system to keep the CPU as busy as possible.
- Utilization ranges from 0 percent to 100 percent.
- However, in lightly loaded systems it is around 40 percent, and in heavily loaded systems it is around 90 percent.

2. Throughput:
- Work is being done when the CPU is busy executing processes.
- Throughput is one measure of this work: the number of processes that complete execution in a given unit of time.
- For example, for short transactions, throughput might be around 10 processes per second.
- For longer transactions, it may be as low as one process per hour.

3. Turnaround time:
- This is an important criterion from the point of view of a process.
- It tells how much time it takes to execute a process.
- Turnaround time is defined as the time elapsed from the submission of a process until its completion.

4. Waiting time:
- The amount of time a process actually spends executing is not affected by the CPU scheduling algorithm.
- Rather, the algorithm affects only the time the process spends waiting in the ready queue.
- The time for which the process waits is called the waiting time.

5. Response time:
- Turnaround time is not a good criterion in every situation.
- Response time is preferable in the case of interactive systems.
- A process can often produce its first output fairly quickly and then continue computing new results while previous output is presented to the user.
- The time taken from the submission of a process until its first response is produced is the response time, another criterion for CPU scheduling algorithms.

All these are primary performance criteria, of which a typical CPU scheduler may select one or more. These criteria may be ranked by the scheduler according to their importance. One common problem in selecting performance criteria is the possibility of conflict between them.
For example, increasing the number of active processes increases CPU utilization but at the same time worsens response time. It is often desirable to reduce waiting time and turnaround time as well. In a number of cases the average measure is optimized, but there are certain cases where it is more beneficial to optimize the maximum or minimum values.
A scheduling algorithm that maximizes throughput will not necessarily minimize turnaround time. Given a mix of short and long jobs, a scheduler that runs only the short jobs produces the best throughput, but the turnaround time for the long jobs becomes so high that it is undesirable. (The sketch below shows how waiting and turnaround times are computed.)
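
To make these measures concrete, here is a small Python sketch computing the average waiting and turnaround times of three hypothetical processes scheduled first come, first served, all assumed to arrive at time 0:

```python
bursts = {"P1": 24, "P2": 3, "P3": 3}    # hypothetical CPU bursts

clock, waiting, turnaround = 0, {}, {}
for name, burst in bursts.items():       # FCFS: run in submission order
    waiting[name] = clock                # time spent in the ready queue
    clock += burst
    turnaround[name] = clock             # from submission (t=0) to completion

print("average waiting time   :", sum(waiting.values()) / len(waiting))       # 17.0
print("average turnaround time:", sum(turnaround.values()) / len(turnaround)) # 27.0
```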


Thursday, May 30, 2013

What are the various Disk Scheduling methods?

About Disk Scheduling

The I/O system has the following layers:
  1. User processes: The functions of this layer include making I/O calls, formatting the I/O and spooling.
  2. Device independent software: Functions are naming, blocking, protection, allocating and buffering.
  3. Device drivers: Functions include setting up the device registers and checking their status.
  4. Interrupt handlers: These perform the function of waking up the I/O drivers upon the completion of the I/O.
  5. Hardware: Performing the I/O operations.
- Disk drives can be pictured as large one-dimensional arrays of logical blocks, which are the smallest units of transfer.
- These blocks are mapped onto the disk sectors sequentially.
- It is the responsibility of the operating system to use the disk-drive hardware efficiently, so as to increase the access speed and bandwidth of the disk.

Algorithms for Scheduling Disk Requests

There are several algorithms for scheduling disk requests:

Ø  SSTF: 
- In this method the request with the minimum seek time from the current head position is selected.
- This method is a form of SJF (shortest job first) scheduling and therefore carries some possibility of starvation.

Ø  SCAN: 
- The disk arm starts at one end of the disk and moves toward the other end, servicing requests along the way until it reaches the opposite end.
- There the head direction is reversed and servicing continues.
- This is sometimes called the elevator algorithm.

Ø  C-SCAN:
- A variant of the previous algorithm.
- It offers a more uniform waiting time than SCAN.
- The head moves from one end to the other, servicing the requests encountered along the way.
- The difference is that when the head reaches the other end, it immediately returns to the beginning of the disk without servicing any requests on the return trip, and then starts again.
- The cylinders are treated as a circular list that wraps around from the last cylinder to the first.

Ø  C-LOOK:
- This is a modified version of C-SCAN.
- Here the arm or head travels only as far as the last request in each direction rather than going to the far end of the disk.
- The direction is then immediately reversed and servicing continues. (A toy comparison of head movement appears below.)
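
The total head movement of these methods can be compared with a small Python sketch. The request queue, starting head position and 200-cylinder disk below are illustrative assumptions; the SCAN variant shown sweeps toward the high end first and travels all the way to the last cylinder before reversing:

```python
def sstf(requests, head):
    """Total head movement under SSTF: always service the nearest request."""
    pending, moved = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        moved += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return moved

def scan(requests, head, cylinders=200):
    """Total head movement under SCAN, sweeping toward the high end first."""
    moved = 0
    below = [c for c in requests if c < head]
    if any(c >= head for c in requests):
        moved += (cylinders - 1) - head      # sweep up to the far end
        head = cylinders - 1
    if below:
        moved += head - min(below)           # reverse and serve the rest
    return moved

queue = [98, 183, 37, 122, 14, 124, 65, 67]  # pending cylinder requests
print(sstf(queue, head=53))                  # 236 cylinders of movement
print(scan(queue, head=53))                  # 331 cylinders of movement
```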

- For disk scheduling it is important that the method be selected according to the system's requirements.
- The first one, SSTF, is the most commonly used and appeals naturally to most needs.
- For a system where there is often a heavy load on the disk, the SCAN and C-SCAN methods can help.
- The number as well as the kind of requests affects the performance in a number of ways.
- The file-allocation method also influences the requests for disk service.
- These algorithms should be written as an individual module of the OS so that, if required, one can easily be replaced with a different one.
- As a default algorithm, either LOOK or SSTF is the most reasonable choice.

Ways to attach to a disk

There are two ways of attaching the disk:
Ø  Network attached: This attachment is made via a network and is called network-attached storage. A set of such connected storage devices together forms a storage area network.
Ø  Host attached: This attachment is made via the I/O port.


All these disk scheduling methods are aimed at optimizing secondary storage access and making the whole system efficient.


Sunday, May 19, 2013

What are the different types of schedulers and how do they work?


Scheduling is an important part of the working of operating systems.
- The scheduler is the component that gives processes, threads and data flows access to system resources.
- These resources may include processor time and communications bandwidth.
- Scheduling is necessary for effectively balancing the load of the system and achieving the target quality of service (QoS).
- Scheduling is also necessary for systems that do multitasking and multiplexing on a single processor, since they need to divide the CPU time between many processes.
- In multiplexing, scheduling is required to time the simultaneous transmission of multiple flows.

Important things about Scheduler

There are three things which most concern the scheduler:
  1. Throughput
  2. Latency, comprising response time and turnaround time
  3. Waiting time, or fairness
- When practically implemented, however, conflicts arise between these goals, for example between latency and throughput.
- It is the scheduler that must strike a compromise between any two goals.
- Which goal is given preference is decided by the user's requirements and objectives.
- In systems such as embedded systems and robotics that operate in a real-time environment, the scheduler has to ensure that processes are capable of meeting their deadlines.
- This is a very critical factor in maintaining the stability of the system.
- An administrative back end may be used for managing scheduled tasks that are then sent to mobile devices.

Types of Schedulers

There are 3 different types of schedulers available which we discuss below:

Long-term Schedulers or Admission Schedulers:
- The purpose of this type of scheduler is to decide which processes and jobs are admitted to the ready queue.
- When a program attempts to execute a process, it is the responsibility of the long-term scheduler to delay or authorize the request to admit the process to the ready queue.
- Thus, which processes the system will execute is dictated by this scheduler.
- It also dictates the degree of concurrency and the handling of CPU-intensive versus I/O-intensive processes.
- Modern operating systems use this to make sure that real-time processes get enough CPU time to finish their tasks.
- Modern GUIs would be of little use without real-time scheduling.
- The long-term queue resides in secondary memory.

Medium-term Schedulers:
- This scheduler serves the purpose of temporarily removing processes from physical memory and placing them in secondary (virtual) memory, and vice versa.
- These operations are called swapping out and swapping in.
- A process that has been inactive for some time might be swapped out by the scheduler.
- It may also swap out a process with frequent page faults, a low priority, or a large memory footprint.
- This is necessary because it makes space available for other processes.

Short-term Schedulers:
- These schedulers are more commonly known as CPU schedulers.
- This scheduler decides which of the ready processes will be executed next after a clock interrupt, a system call, an I/O interrupt, some other hardware interrupt and so on.
- Thus, the short-term scheduler makes decisions far more frequently than the long-term and medium-term schedulers, since it has to decide after every time slice.
There is one more component involved in CPU scheduling that is not counted among the schedulers: the dispatcher.


Saturday, May 4, 2013

What is Context Switch?


- A context switch refers to the process of storing and restoring the context, or state, of a process.
- This makes it possible to resume the execution of the process from that same saved point in the future.
- This is very important because it enables multiple processes to share one CPU, and it is therefore an essential feature of a multitasking operating system.
- It is the operating system and the processor that decide what constitutes the context.
- One of the major characteristics of context switches is that they are computationally very intensive.
- Much of operating system design is concerned with optimizing the use of these switches.
- A finite amount of time is required to switch from one process to another.
- This time is spent administering the switch: saving and loading the memory maps and registers, updating various lists and tables, and so on.
- A context switch may mean any of the following:
Ø  A register context switch
Ø  A task context switch
Ø  A thread context switch
Ø  A process context switch

Potential Triggers for a Context Switch

There are three potential triggers for a context switch. A switch can be triggered in any of the three conditions:

1. Multi-tasking:
- Commonly, one process has to be switched out of the processor so that another process can execute.
- This is done by the use of some scheduling scheme.
- A process that makes itself unrunnable can trigger this kind of context switch.
- The process can do this by waiting for synchronization or for an I/O operation to finish.
- On a multitasking system that uses preemptive scheduling, processes that are still runnable might also be switched out by the scheduler.
- A timer interrupt is employed by some preemptive schedulers to prevent processes from starving for CPU time.
- This interrupt is triggered when a process exceeds its time slice, and it ensures that the scheduler gains control to perform the switch.

2. Interrupt handling:
- Modern architectures are interrupt driven.
- This implies that the CPU can issue a request (for example, a disk read) and continue with other execution rather than waiting for the operation to finish.
- When the operation is over, the interrupt fires and presents the result to the CPU.
- An interrupt handler is used for handling interrupts.
- It is this program that handles, for instance, the interrupt from the disk.
- A part of the context is switched automatically by the hardware upon the occurrence of an interrupt.
- This part is enough for the handler to return to the code that was interrupted.
- Additional context might be saved by the handler, depending on the details of the software and hardware designs.
- Usually only a small, required part of the context is changed, so as to keep the time required for handling as low as possible.
- The kernel does not schedule a separate process to handle the interrupts; the handler runs in the context established when the interrupt occurred.

3. User and kernel mode switching:
- A context switch is not necessarily required to make a transition between kernel mode and user mode in an operating system.
- A mode transition in itself is not a context switch.
- However, depending on the OS, a context switch may also take place at this time.


Friday, May 3, 2013

What is a Dispatcher?


A number of types of schedulers are available to suit the different needs of different operating systems. Presently, there are three categories of schedulers:
  1. Long-term schedulers
  2. Medium-term schedulers
  3. Short-term schedulers
Apart from the schedulers, there is one more component involved in the scheduling process, known as the dispatcher.
- It is the dispatcher that gives a process control of the CPU.
- Which process is given this control is selected by the short-term scheduler.
- This whole process involves the following three steps:
  1. Switching the context
  2. Switching to user mode
  3. Jumping to the proper location in the program from where it has to be restarted.
- The dispatcher reads the program counter and accordingly fetches instructions and feeds data into the registers.
- The dispatcher, unlike other system components, needs to be very quick, since it is invoked during every switch that occurs.
- Whenever a context switch is invoked, the processor is idle for a very small period of time.
- Hence, context switches that are not necessary should be avoided.
- The dispatcher takes some time to stop one process and start running another.
- This time is called the dispatch latency.

- Scheduling and dispatch are complex processes, interrelated with each other.
- Both are essential to the operation of the operating system.
- Today, architectural extensions are available for modern processors that provide a number of banks of registers.
- These register banks can be swapped in hardware, so a certain number of tasks can each retain their full set of registers.
- Whenever an interrupt triggers the dispatcher, the dispatcher receives the full register set of the process that was being executed at the time the interrupt occurred.
- The program counter is not included here.
- Therefore, it is important that the dispatcher be written carefully to store the present state of the registers as soon as it is triggered.
- In other words, for the dispatcher itself there is no immediate context, which saves it from the same problem.

Process of Dispatcher

Below we try to describe in simple words what the dispatch process actually is.
  1. The processor executes the program that presently has the context. This program uses the stack base, flags, program counter, registers and so on, with the possible exception of a reserved register native to the operating system. The executing program has no knowledge of the dispatcher.
  2. A timer interrupt is triggered for the dispatcher. The program counter jumps to the address listed in the BIOS interrupt table, and execution of the dispatch subroutine begins. The dispatcher then deals with the stack, registers, etc. of the program that was interrupted.
  3. Like other programs, the dispatcher consists of sets of instructions that operate upon the registers of the current program. The first few of these instructions are responsible for storing the state of that program.
  4. The dispatcher next determines which program should be given the CPU next. It clears the saved details of the previously executing state and fills in the details of the next process to be executed.
  5. The dispatcher jumps to the address in the program counter, establishing a full context on the processor. (A toy model of these steps appears below.)
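
The following deliberately simplified Python model mimics steps 3 to 5: the "CPU" is just a dict, and saving or restoring a context is a matter of copying that dict to and from each program's saved-state slot. All names and values here are illustrative assumptions, not how a real dispatcher is written:

```python
# Toy model of a dispatch: the CPU state is a dict, contexts are saved copies.
cpu = {"pc": 100, "registers": [1, 2, 3, 4]}        # program A has the context
saved = {"B": {"pc": 200, "registers": [5, 6, 7, 8]}}

def dispatch(current, nxt):
    saved[current] = dict(cpu)      # store the state of the interrupted program
    cpu.update(saved[nxt])          # load the next context; updating "pc" plays
    return nxt                      # the role of jumping to the saved location

running = dispatch("A", "B")
print(running, cpu)                 # B {'pc': 200, 'registers': [5, 6, 7, 8]}
```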
- Strictly speaking, the dispatcher does not really require registers of its own, since its only work is to write the current state of the CPU into a predetermined memory location.
- It then loads another process into the CPU from another predetermined location.


Thursday, May 2, 2013

What is a CPU Scheduler?


Scheduling is a very important concept when it comes to multitasking operating systems.
- It is the method via which data flows, threads and processes are provided access to the shared resources of the computer system.
- These resources include communications bandwidth, processor time, memory and so on.
- Scheduling is important because it helps balance the load on the system processor and use its resources effectively.
- It helps in achieving the target quality of service (QoS).
But what gave rise to scheduling?
- Almost all modern systems need to carry out multiple tasks, i.e., multitasking, as well as multiplexing, both of which require a scheduling algorithm.
- Multiplexing means transmission of multiple data flows at the same time.
- There are some other things with which the scheduler is concerned. They are:
  1. Throughput: The number of processes that complete execution in a given amount of time.
  2. Latency: This factor can be subdivided into two parts, response time and turnaround time. Response time is the time taken from the submission of a process until its first response is produced by the processor. The latter, turnaround time, is the period elapsed between the submission of a process and its completion.
  3. Waiting/fairness time: This is the CPU time given to each process, either equally or allocated according to the priority of the processes. The time for which processes wait in the ready queue is also counted here.
- In practice, conflicts may arise between these goals, as in the case of latency versus throughput.
- If such a case occurs, a suitable compromise has to be implemented by the scheduler.
- The needs and objectives of the user decide which of the above concerns is given preference.
- In robotics or embedded systems, i.e., in real-time environments, it is the scheduler's duty to ensure that all processes meet their deadlines.
- This is important for maintaining the stability of the system.
- Mobile devices are given scheduled tasks, which are then managed by an administrative back end.

Types of CPU Schedulers

There are several types of CPU schedulers, as discussed below:
1. Long-term Schedulers:
- These schedulers facilitate long-term scheduling and are also known as high-level schedulers or admission schedulers.
- It is up to them to determine which processes and jobs are sent to the ready queue.
- When an attempt is made to execute a program, the long-term scheduler decides whether this program will be admitted to the set of currently executing processes.
- Thus, this scheduler dictates which processes are to be run and the degree of concurrency to be supported, that is, how many processes are to be executed concurrently.
- It also decides how the split between CPU-intensive and I/O-intensive processes is handled.

2. Medium-term Schedulers: 
- Processes are temporarily removed from main memory and placed in secondary memory by this scheduler, and vice versa.
- These operations are called "swapping out" and "swapping in".
- Usually this scheduler swaps out the following processes:
a)   processes that have been inactive for some time
b)   processes that have raised frequent page faults
c)   processes having a low priority
d) processes that take up large chunks of memory, in order to release main memory to other processes
- The scheduler later swaps these processes back in when sufficient memory is available and the processes are unblocked and no longer waiting.

3. Short-term Schedulers: 
These schedulers decide which of the ready processes is to be executed after the next clock interrupt, I/O interrupt or system call.


Saturday, April 20, 2013

Explain the concepts of threads and processes in operating systems


Threads and processes are an important part of operating systems that support multitasking and parallel programming. Both come under the single concept of ‘scheduling’. Let us try to understand these concepts with the help of an analogy.

- Consider a process to be a house and threads to be its occupants.
- The process is like a container having many attributes.
- These attributes can be compared to those of a house, such as the number of rooms, floor space and so on.
- Despite having so many attributes, the house is a passive thing; it cannot perform anything on its own.
- The active elements in this situation are the occupants of the house, i.e., the threads.
- It is they who actually use the various attributes of the house.
- Since you too live in a house, you have an idea of how it actually works and behaves.
- If you are the only one there, you can do whatever you like in the house.
- But what if another person starts living with you? You can no longer do just anything you want.
- You cannot use the washroom without making sure that the other person is not in there.
- This is just like multithreading.
- Just as a house occupies part of an estate, an amount of memory is occupied by the process.
- Just as the occupants are allowed to freely access anything in the house, the occupied memory is shared by the threads that are part of that process, i.e., access to memory is common.
- If one thread allocates some memory, it can be accessed by all the other threads as well.
- If such sharing is happening, it has to be made sure that access to the memory is synchronized across all the threads.
- If synchronization is not needed, that is because the memory has effectively been allocated for one specific thread's use.
- But in reality things are a lot more complicated, because at some point everything has to be shared.
- If one thread wants to use a resource that is already in use by some other thread, then that thread has to follow the concept of mutual exclusion.
- An object known as a mutex is used by a thread to achieve exclusive access to that resource.
- A mutex can be compared to a door lock.
- Once a thread locks it, no other thread can use that resource until the mutex is unlocked again by that thread.
- The mutex itself is thus one resource that a thread uses.
- Now suppose there are many threads waiting to use the resource when the mutex is unlocked; the question that arises is which one will be next to use the resource.
- This problem can be solved by deciding either on the basis of the length of the wait or on the basis of priority.
- Now suppose there is a location that can be accessed by more than one thread simultaneously.
- You want only a limited number of threads using that memory location at any given point in time.
- This problem cannot be solved with a mutex, but it can with another object called a semaphore.
- A semaphore with a count of 1 behaves like a mutex: the resource can only be used by one thread at a time.
- A semaphore with a greater count allows that many threads to access the resource simultaneously.
- It all depends upon how you characterize or set the lock, as the sketch below shows.
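
In Python's threading module, for instance, these two objects appear as Lock (a mutex) and Semaphore. The sketch below is illustrative only; the thread names and the limit of two concurrent threads are assumptions:

```python
import threading
import time

door_lock = threading.Lock()        # the mutex: one thread at a time
washrooms = threading.Semaphore(2)  # a semaphore with a count of 2

def exclusive(name):
    with door_lock:                 # other threads block here until unlocked
        print(f"{name} has exclusive access")

def limited(name):
    with washrooms:                 # at most 2 threads inside at once
        print(f"{name} entered")
        time.sleep(0.1)             # hold the slot briefly

threads = [threading.Thread(target=exclusive, args=(f"T{i}",)) for i in range(3)]
threads += [threading.Thread(target=limited, args=(f"T{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```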


Wednesday, April 17, 2013

What are Real-time operating systems?


- An RTOS, or real-time operating system, is developed with the intention of serving application requests that occur in real time.
- This type of operating system is capable of processing data as and when it comes into the system.
- It does this without buffering delays.
- Time requirements are measured in tenths of seconds or on an even smaller scale.
- A key characteristic of a real-time operating system is that the amount of time it takes to accept and process a given task remains consistent.
- The variability is so small that it can be ignored.

Real-time operating systems come in two types, as stated below:
  1. Soft real-time operating systems: these produce more jitter.
  2. Hard real-time operating systems: these produce less jitter than the former.
- Real-time operating systems are driven by the goal of giving guaranteed hard or soft performance rather than just producing high throughput.
- Another distinction between the two is that a soft real-time operating system can generally meet deadlines, whereas a hard real-time operating system meets deadlines deterministically.
- For scheduling purposes, some advanced algorithms are used by these operating systems.
- Flexibility in scheduling has advantages to offer, such as a wider, computer-system-level orchestration of process priorities.
- But a typical real-time OS dedicates itself to a small number of applications at a time.
- There are two key factors in any real-time OS, namely:
  1. Minimal interrupt latency and
  2. Minimal thread switching latency.
- Two design philosophies are followed in designing real-time OSs:
  1. Time-sharing design: As per this design, tasks are switched on a clocked interrupt and on events, at regular intervals. This is also termed round robin scheduling.
  2. Event-driven design: As per this design, switching occurs only when some event demands a higher priority. This is why it is also termed priority scheduling or preemptive priority.
- In the former design, tasks are switched more frequently than strictly required, but this proves good at providing a smooth multitasking experience.
- This gives each user the illusion of sole use of the machine.
- Early CPU designs needed several cycles to switch a task, during which the CPU could perform no other work.
- This is why early operating systems avoided unnecessary switching in order to save CPU time.
- Typically, in any design there are 3 states of a task:
  1. Running or executing on CPU
  2. Ready to be executed
  3. Waiting or blocked for some event
- Most tasks are kept in the second and third states because the CPU can perform only one task at a time.
- The number of tasks waiting in the ready queue may vary depending on the running applications and the type of scheduler being used by the CPU.
- On multitasking systems that are non-preemptive, one task might have to give up its CPU time to let other tasks be executed.
- This leads to a situation called resource starvation, i.e., the number of tasks to be executed is greater than the resources available.

