- An RTOS, or real-time operating system, is designed to serve application requests that arrive in real time.
- This type of operating system processes data as and when it comes into the system.
- It does this without introducing buffering delays.
- Timing requirements are measured in tenths of a second or on an even smaller scale.
- A key characteristic of a real-time operating system is that the time it takes to accept and process a given task remains consistent.
- The variability is so small that it can be ignored entirely.
Real-time operating systems come in two types, as stated below:
- The soft real-time operating system: it produces more jitter.
- The hard real-time operating system: it produces less jitter than the soft variety.
- Real-time operating systems are driven by the goal of guaranteeing hard or soft performance rather than simply maximizing throughput.
- Another distinction between the two is that a soft real-time operating system can generally meet deadlines, whereas a hard real-time operating system meets deadlines deterministically.
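The soft/hard distinction above can be illustrated with a small sketch (the function name, deadline value, and sample timings are all assumptions, not real measurements): a hard real-time system must show a zero deadline-miss rate, while a soft one tolerates occasional misses.

```python
DEADLINE_MS = 10.0  # assumed deadline for one task instance

def miss_fraction(completion_times_ms, deadline_ms=DEADLINE_MS):
    """Return the fraction of task instances that missed the deadline."""
    misses = sum(1 for t in completion_times_ms if t > deadline_ms)
    return misses / len(completion_times_ms)

# Soft real-time: occasional misses (jitter) are tolerated.
soft_samples = [8.9, 9.7, 10.4, 9.1, 11.2, 9.8]
# Hard real-time: every single instance must finish in time.
hard_samples = [8.9, 9.7, 9.9, 9.1, 9.6, 9.8]

print(miss_fraction(soft_samples))  # non-zero: some deadlines missed
print(miss_fraction(hard_samples))  # → 0.0
```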
- For scheduling, these operating systems use advanced algorithms.
- Flexible scheduling offers many advantages, such as wider control over the orchestration of process priorities.
- However, a typical real-time OS dedicates itself to only a small number of applications at a time.
- There are two key factors in any real-time OS, namely:
- Minimal interrupt latency, and
- Minimal thread-switching latency.
- Two design philosophies are followed in designing real-time OSes:
- Time-sharing design: tasks are switched on a clocked interrupt and on events, at regular intervals. This is also termed round-robin scheduling.
- Event-driven design: switching occurs only when some other event demands higher priority. This is why it is also termed priority scheduling or preemptive priority.
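The two design philosophies above can be sketched side by side (task names and priority values here are illustrative assumptions): the time-sharing scheduler rotates every ready task through fixed slices, while the event-driven one always runs the highest-priority ready task.

```python
import heapq
from collections import deque

def round_robin(tasks, slices):
    """Time-sharing: each ready task runs one fixed slice, then requeues."""
    queue, order = deque(tasks), []
    for _ in range(slices):
        task = queue.popleft()   # next ready task gets the CPU
        order.append(task)       # "run" it for one time slice
        queue.append(task)       # preempt on the clock tick; requeue
    return order

def priority_run(tasks):
    """Event-driven: always dispatch the highest-priority ready task.
    `tasks` holds (priority, name) pairs; a lower number means higher priority."""
    heap = list(tasks)
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

print(round_robin(["A", "B", "C"], 6))
# → ['A', 'B', 'C', 'A', 'B', 'C']
print(priority_run([(2, "logger"), (0, "isr"), (1, "ui")]))
# → ['isr', 'ui', 'logger']
```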
- In the former design, tasks are switched more frequently than strictly required, but this proves good at providing a smooth multitasking experience.
- This gives each user the illusion of having sole use of the machine.
- Earlier CPU designs required several cycles to switch tasks, during which the CPU could perform no other work.
- This is why early operating systems avoided unnecessary switching in order to save CPU time.
- Typically, in any design a task has three states:
- Running (executing on the CPU)
- Ready to be executed
- Waiting (blocked on some event)
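The three states and the transitions between them can be sketched as a small state machine (the enum and transition table are a simplified model, not any particular RTOS's API):

```python
from enum import Enum

class TaskState(Enum):
    RUNNING = "running"   # executing on the CPU
    READY = "ready"       # runnable, waiting for the CPU
    WAITING = "waiting"   # blocked on some event

# Allowed transitions in this simplified model:
TRANSITIONS = {
    TaskState.READY:   {TaskState.RUNNING},                  # dispatched by the scheduler
    TaskState.RUNNING: {TaskState.READY, TaskState.WAITING}, # preempted, or blocks on an event
    TaskState.WAITING: {TaskState.READY},                    # the awaited event arrives
}

def can_transition(src, dst):
    return dst in TRANSITIONS[src]

print(can_transition(TaskState.WAITING, TaskState.READY))    # → True
print(can_transition(TaskState.WAITING, TaskState.RUNNING))  # → False
```

Note that a waiting task cannot go straight to running: it must first become ready and then be dispatched.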
- Most tasks are kept in the ready and waiting states, because the CPU can execute only one task at a time.
- The number of tasks waiting in the ready queue may vary depending on the running applications and the type of scheduler in use.
- On non-preemptive multitasking systems, a task must voluntarily give up the CPU to let other tasks execute.
- This can lead to a situation called resource starvation, i.e., there are more tasks to execute than there are resources to run them.
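Non-preemptive (cooperative) multitasking can be sketched with generators (task names and step counts are assumptions): each task runs until it voluntarily yields, and a task that never yields would starve every other task.

```python
def polite(name, steps):
    """A cooperative task that yields the CPU after each step."""
    for i in range(steps):
        yield f"{name} step {i}"   # voluntarily gives up the CPU

def cooperative_run(tasks, max_steps=10):
    """Run tasks non-preemptively: each runs until it yields, then requeues."""
    log = []
    while tasks and len(log) < max_steps:
        task = tasks.pop(0)
        try:
            log.append(next(task))  # run until the task yields
            tasks.append(task)      # requeue behind the others
        except StopIteration:
            pass                    # task finished; drop it
    return log

print(cooperative_run([polite("A", 2), polite("B", 2)]))
# → ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

Because the scheduler never interrupts a running task, fairness here depends entirely on every task yielding promptly.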