
Sunday, July 14, 2013

What is Polling?

- Polling is sometimes referred to as a polled operation.
- Polling is the active sampling of the status of an external device by a client program, carried out as a synchronous activity.
- Polling is most commonly used in input and output operations.
- It is also called software-driven I/O or simply polled I/O.
- Polling is sometimes used as a synonym for busy waiting.
- In that case it is referred to as busy-wait polling.
- In busy-wait polling, whenever an input/output operation is required, the system repeatedly checks the status of the required device until it is idle.
- When the device becomes idle, it is accessed by the I/O operation.
- Polling may also refer to a scheme in which the status of the device is checked repeatedly so that it can be accessed once idle.
- If the device is occupied, the system returns to some other pending task.
- In this case less CPU time is wasted than in busy waiting.
- However, it is still not as efficient as interrupt-driven I/O.
- In simple single-purpose systems, busy-wait polling is perfectly fine if the system can take no action until the I/O device has been accessed.
- Traditionally, though, polling was seen as a consequence of simple hardware and of operating systems that do not support multitasking.
- Polling usually works intimately with low-level hardware.
- For example, a parallel printer port can be polled to check whether it is ready to accept another character.
- This involves examining just one bit.
- The bit examined represents the high or low voltage state of a single wire in the printer cable at the time of reading.
- The I/O instruction that reads this byte transfers the voltage states directly to eight flip-flop circuits.
- Together, these 8 flip-flops constitute one byte of a CPU register.

Polling also has a number of disadvantages.
- One is that there is only a limited time available for servicing the I/O devices, and the polling has to be done within this time period.
- In some cases there are so many devices to check that the polling time exceeds this limit.
- The host keeps reading the busy bit until it is clear, i.e., the device is idle.
- When the device is idle, the host writes the command into the command register and the data into the data-out register.
- The host then sets the command-ready bit to 1.
- The controller sets the busy bit once it notices that the command-ready bit has been set.
- After reading the command register, the controller carries out the required I/O operation on the device.
- If instead the read bit in the command register has been set to one, the controller loads the device data into the data-in register.
- This data is then read by the host.
- Once the whole action has been completed, the command-ready bit is cleared by the controller.
- The error bit is also cleared to show that the operation completed successfully.
- Finally, the busy bit is cleared as well.
- Polling can also be seen in terms of a master-slave scenario, where the master sends inquiries to the slave devices about their working status, i.e., whether they are clear or engaged.
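
To make the busy-wait idea concrete, here is a minimal C sketch. The register addresses, names, and the BUSY bit position are invented for illustration; a real device's datasheet would define them.

    #include <stdint.h>

    /* Hypothetical memory-mapped device registers; these addresses and
     * the BUSY bit position are assumptions for illustration only. */
    #define STATUS_REG (*(volatile uint8_t *)0x40001000u)
    #define DATA_REG   (*(volatile uint8_t *)0x40001004u)
    #define BUSY_BIT   0x01u    /* set while the device is working */

    /* Busy-wait polling: spin on the status register until the device
     * reports idle, then hand it the next byte. */
    static void polled_write(uint8_t byte)
    {
        while (STATUS_REG & BUSY_BIT)
            ;                   /* CPU time is burned here doing nothing */
        DATA_REG = byte;
    }

The empty loop body is exactly where the CPU time goes; interrupt-driven I/O exists to reclaim it.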


Tuesday, May 28, 2013

Concept of page fault in memory management

A page fault (also known as PF or #PF) can be thought of as a trap that the hardware raises for the software whenever a program tries to access a page that is mapped into the virtual address space but has not been loaded into main memory.

In most cases, the operating system handles the page fault by making the required page accessible at an address in main (physical) memory, or by terminating the program if it has made an illegal attempt to access the page.

- The memory management unit (MMU), located in the processor, is the hardware responsible for detecting page faults.
- The software that helps the memory management unit handle page faults is the exception-handling code, which is part of the OS.
- A page fault is not always an error.
- Page faults often play a necessary role in increasing the memory that an operating system using virtual memory can make available to applications.
- "Hard fault" is the term Microsoft uses instead of "page fault" in recent versions of the Resource Monitor.

Classification of Page Faults

Page faults can be classified into three categories, namely:

1. Minor: 
- This type of fault, also called a soft page fault, occurs when the page is already loaded in memory at the time the fault is generated, but the memory management unit has not marked it as loaded in physical memory. 
- The operating system includes a page fault handler whose duty is to make the entry for the page pointed to by the memory management unit. 
- After making the entry, its task is to indicate that the page has been loaded. 
- It is therefore not necessary for the page to be read into memory. 
- This situation arises when different programs share memory and the page has already been loaded into memory for other applications. 
- In operating systems that apply the technique of secondary page caching, a page can be removed from the working set of a process without being deleted or written to disk.

2. Major: 
- A major fault is the mechanism many operating systems use to increase the memory available to a program on demand. 
- The operating system delays loading parts of the program from disk until the program attempts to use them and generates a page fault.
- In this case the page fault handler has to find a free page in memory or, if none is available, a non-free page to evict. 
- Once a page is available, the operating system can read the data for the required page into it and make the corresponding entry.

3. Invalid: 
- This type of fault occurs whenever a reference is made to an address that does not exist in the virtual address space, so it has no page corresponding to it in memory. 
- The page fault handler then has to terminate the code that made the reference and indicate that the reference was invalid. 
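
As a rough way to watch minor and major faults happen, the POSIX getrusage() call reports per-process fault counters on Linux and most Unix-like systems. A minimal sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Touch a freshly allocated 64 MB buffer: the first write to each
         * page typically raises a minor (soft) fault as the kernel maps
         * an already-available page into this process. */
        size_t len = 64u * 1024 * 1024;
        char *buf = malloc(len);
        if (buf == NULL)
            return 1;
        memset(buf, 1, len);

        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        printf("minor faults: %ld, major faults: %ld\n",
               ru.ru_minflt, ru.ru_majflt);

        free(buf);
        return 0;
    }

Major faults stay near zero here because nothing has to be fetched from disk; running the same program under memory pressure would drive that counter up.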


Friday, May 17, 2013

What is a process? What are sequential and concurrent processes?


- Each and every task that we ask our computer to carry out is accomplished by a set of processes. 
- It is these processes that actually run the program. 
- A process can be defined as an instance of a program that is currently being executed. 
- A program's current activity and the code being executed are stored in the process itself. 
- Whether a process consists of multiple threads for concurrent execution or of just one thread for sequential execution depends on the operating system. 

This gives rise to two different types of processes, namely:

Sequential processes 
Sequential processes can be executed on a single processor, but concurrent processes may sometimes require more than one processor.

Concurrent processes
Concurrent processes are executed in parallel with each other at the same time, whereas sequential processes go step by step, executing one instruction at a time. 

Concepts of Process

- A computer program can be defined as a set of passive instructions. When these instructions are actually executed, they form a process. 
- The same program may have a number of processes associated with it. 
- Multiple processes can be executed by sharing the processors and other resources. 
- Doing this is called multitasking. 
- Each processor runs a single task at a time. 
- With multitasking, the processor can switch between the different tasks, so processes do not have to wait for long. 
- However, it depends entirely on the operating system when the switch has to be performed:
  1. When the task is performing I/O operations or
  2. When the task itself indicates that it can now be switched or
  3. On hardware interrupts.
- Time sharing is a common form of multitasking and allows interactive user applications to respond quickly. 
- In time-sharing systems, switching is done very rapidly. 
- This gives the illusion that multiple processes are being executed simultaneously by the same processor. 
- This type of execution is termed concurrency.
- Many modern operating systems prevent direct communication between independent processes for reasons of reliability and security. 
- Inter-process communication functionality is kept under strict control and mediation. 
- In general, the following resources are said to constitute a process:
Ø  Executable machine code’s image associated with the task.
Ø  Some part of virtual memory, comprising the process-specific data, the executable code, a heap, and a call stack. The heap holds the intermediate data generated during execution, and the call stack keeps track of the subroutines.
Ø  OS descriptors for the resources allocated to the process, such as data sources, sinks, and file descriptors.
Ø  Security attributes including the set of permissions for the process and the owner of the process.
Ø  Processor state, such as register contents and physical memory addressing. The state is held in registers while the process is executing, and in memory otherwise.

- Most of this information about active processes is held in process control blocks. 
- The operating system keeps the resources of different processes separate and allocates each process the resources it requests, so that processes cannot interfere with one another and cause system failures such as thrashing or deadlocks. 
- But processes do need to communicate with each other. 
- To make such interaction safe, the operating system provides mechanisms specifically for inter-process communication.
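
A minimal POSIX sketch of the process concept: fork() turns one running instance of a program into two independent processes, and the OS-mediated wait() is how the parent coordinates with the child.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();     /* one program, now two process instances */

        if (pid == 0) {
            /* Child: a separate process with its own address space. */
            printf("child  pid=%d\n", (int)getpid());
        } else if (pid > 0) {
            /* Parent: may run concurrently with the child. */
            printf("parent pid=%d started child %d\n",
                   (int)getpid(), (int)pid);
            wait(NULL);         /* OS-mediated coordination: reap the child */
        }
        return 0;
    }

Both processes run the same code, yet each has its own resources, which is exactly the instance-of-a-program definition given above.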




Tuesday, May 7, 2013

What is meant by Time sharing system?


In the field of computer science, the sharing of a computer's resources among many users through the techniques of multi-tasking and multi-programming is termed a time-sharing system. 
- Time sharing was first introduced in the early 1960s and emerged as the most prominent computing model of the 1970s. 
- With it came a major shift in the technology of designing efficient computers. 
- These systems allowed quite a large number of users to interact with the same computer system at the same time. 
- Providing computing capability was a costly affair at that time. 
- Time sharing brought this cost down greatly. 
- Since time sharing allows multiple users to interact simultaneously with the same system, it made it possible for organizations and individuals to use a system they did not even own. 
- This further promoted the interactive use of computers and the development of applications with interactive interfaces. 
- The earlier systems, apart from being expensive, were quite slow. 
- This was the reason the systems could be dedicated to only one task at a time. 
- Tasks were controlled through control panels, from which the operator would manually enter small programs via switches in order to load and run a new series of programs. 
- These programs could take up to weeks to finish executing. 
- It was the realization of a pattern in user interaction that led to the development of time-sharing systems. 
- Usually, a single user entered data in small bursts of information followed by long pauses. 
- But if multiple users worked concurrently on the same system, their activities could fill up the pauses left by each single user. 
- For a user group of a given size, the overall process could thus be made very efficient. 
- In the same way, the time spent waiting for network, tape, or disk input could be used for the activities of other users. 
- A system able to harness this potential advantage was difficult to implement.
- Even though batch processing was at its height at that time, it could only make use of the time delay between two programs. 
- The early days saw the multiplexing of computer terminals into mainframe computer systems.
- Such implementations sequentially polled the terminals to check whether the users had additional data or action requests.

- Later came interrupt-driven interconnection technology, which made use of parallel data transfer technologies such as IEEE 488.
- Time sharing faded for some time with the advent of microcomputing, but came back into the scene with the rise of the Internet. 
- Corporate server farms costing millions can host a large number of customers all sharing the same resources.
- As with the early serial terminals, the operation of websites comes in bursts of activity followed by idle periods. 
- It is because of this burstiness that the services of a website can be used by a large number of users simultaneously, without the communication delays being noticed by any of them.
- However, if the server gets too busy, the delays will certainly start to be noticed.
- Earlier, time-sharing services, such as those of service bureaus, were offered by many companies. 
- Some examples of common systems that were used for time sharing are:
  1. SDS 940
  2. PDP-10
  3. IBM 360


Saturday, May 4, 2013

What is Context Switch?


- A context switch refers to the process of storing and restoring the context, or state, of a process. 
- This makes it possible to resume execution of the process from the same saved point in the future. 
- This is very important, since it enables several processes to share one CPU; it is therefore one of the essential features of a multi-tasking operating system. 
- It is the operating system and the processor that decide what constitutes the context. 
- One major characteristic of context switches is that they are computationally intensive.
- Much of operating system design is concerned with optimizing the use of these switches. 
- A finite amount of time is required to switch from one process to another. 
- This time is spent in administration: saving and loading registers and memory maps, and updating various lists and tables. 
- A context switch may mean any of the following:
Ø  A register context switch
Ø  A task context switch
Ø  A thread context switch
Ø  A process context switch

Potential Triggers for a Context Switch

There are three potential triggers for a context switch. A switch can be triggered in any of the three conditions:

1. Multi-tasking: 
- Commonly, one process has to be switched out of the processor so that another process can be executed. 
- This is done by means of some scheduling scheme. 
- A process can also trigger this context switch by making itself unrunnable. 
- The process can do this by waiting for synchronization or for an I/O operation to finish. 
- On a multitasking system that uses pre-emptive scheduling, even processes that are still runnable may be switched out by the scheduler. 
- Some preemptive schedulers employ a timer interrupt to keep processes from being starved of CPU time.
- This interrupt is triggered when a process exceeds its time slice. 
- The interrupt ensures that the scheduler gains control to perform the switch.

2. Interrupt handling: 
- Modern architectures are interrupt driven. 
- This means that the CPU can issue a request, for example a disk read or write, and continue with some other execution instead of waiting for the operation to finish. 
- When the operation is over, an interrupt fires and presents the result to the CPU. 
- The interrupt is handled by a program called the interrupt handler. 
- For a disk operation, this program handles the interrupt raised directly by the disk. 
- Part of the context is automatically switched by the hardware upon the occurrence of an interrupt. 
- This part is enough for the handler to return to the code that was interrupted.
- Additional context may be saved by the handler, depending on the details of the hardware and software designs. 
- Usually, only the smallest required part of the context is changed, so as to keep the time required for handling the interrupt as short as possible. 
- The kernel does not schedule a separate process to handle the interrupts.

3. User and kernel mode switching: 
- A context switch is not strictly required to make a transition between user mode and kernel mode in an operating system.
- A mode transition in itself is not a context switch. 
- However, depending on the operating system, a context switch may also take place at this point. 
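
On Linux and most BSDs, getrusage() also exposes how often a process has been context-switched, split into voluntary switches (the process made itself unrunnable, e.g. by waiting) and involuntary ones (the scheduler preempted it). A small sketch:

    #include <stdio.h>
    #include <sched.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Yield the CPU repeatedly; each sched_yield() invites the
         * scheduler to switch this process out voluntarily. */
        for (int i = 0; i < 1000; i++)
            sched_yield();

        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
        return 0;
    }

The two counters correspond directly to triggers 1 and its preemptive variant described above.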


Friday, May 3, 2013

What is a Dispatcher?


A number of types of schedulers are available that suit the different needs of different operating systems. Presently, there are three categories of the schedulers:
  1. Long-term schedulers
  2. Medium-term schedulers
  3. Short-term schedulers
Apart from the schedulers, there is one more component involved in the scheduling process, known as the dispatcher. 
- It is the dispatcher that gives a process control of the CPU. 
- Which process is given this control is selected by the short-term scheduler. 
- The whole process involves the following three steps:
  1. Switching the context
  2. Switching to user mode
  3. Jumping to the proper location in the program from where it has to be restarted.
- The dispatcher examines the value of the program counter and accordingly fetches instructions and feeds data into the registers. 
- Unlike the other system components, the dispatcher needs to be very quick, since it is invoked on every switch. 
- Whenever a context switch is invoked, the processor is effectively idle for a very small period of time. 
- Hence, context switches that are not necessary should be avoided. 
- The dispatcher takes some time to stop one process and start running another. 
- This time is called the dispatch latency.

- Scheduling and dispatch are complex, interrelated processes. 
- Both are essential to the operation of the operating system. 
- Today, architectural extensions of modern processors provide multiple banks of registers.
- These register banks can be swapped in hardware, so a certain number of tasks can each retain their full set of registers. 
- Whenever an interrupt triggers the dispatcher, the full set of registers belonging to the process that was executing at the time of the interrupt is handed to it. 
- The program counter is not included here. 
- Therefore, the dispatcher must be written carefully to store the present state of the registers as soon as it is triggered. 
- In other words, the dispatcher itself has no immediate context of its own. 
- This saves it from the very problem it is solving. 

The Dispatch Process

Below we describe in simple words what the process actually is.
  1. The program presently holding the context is executed by the processor. Things used by this program include the stack base, flags, program counter, registers and so on, with the possible exception of a reserved register native to the operating system. The executing program has no knowledge of the dispatcher.
  2. A timed interrupt is triggered for the dispatcher. The program counter jumps to the address listed in the interrupt vector, and execution of the dispatch subroutine begins. The dispatcher then deals with the stack, registers, etc. of the program that was running when the interrupt occurred.
  3. The dispatcher, like other programs, consists of sets of instructions that operate on the registers of the current program. The first few of these instructions are responsible for storing the state of that program.
  4. The dispatcher next determines which program should be given the CPU next. It then replaces the stored state of the previously executed program with the details of the next process to be executed.
  5. The dispatcher jumps to the address given in the program counter, establishing the full context of the new process on the processor.
- Strictly, the dispatcher does not really require registers of its own, since its only work is to write the current state of the CPU into a predetermined memory location. 
- It then loads another process into the CPU from another predetermined location. 
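
The following toy C sketch mimics the dispatch loop in miniature. A real dispatcher saves and restores full register sets; here a single counter stands in for the saved context, which is enough to show the save-select-restore rhythm described above.

    #include <stdio.h>

    /* Toy "task": the step counter stands in for a saved register
     * context; nothing here touches real CPU state. */
    typedef struct {
        const char *name;
        int step;
    } task_t;

    /* One dispatch: "restore" the task's context, run a slice,
     * then "save" the context back. */
    static void dispatch(task_t *t)
    {
        printf("%s resumes at step %d\n", t->name, t->step);
        t->step++;
    }

    int main(void)
    {
        task_t tasks[] = { {"A", 0}, {"B", 0}, {"C", 0} };

        /* Round-robin selection, as a short-term scheduler might do. */
        for (int slice = 0; slice < 6; slice++)
            dispatch(&tasks[slice % 3]);
        return 0;
    }

Each task resumes exactly where it left off, which is the whole point of saving the context before handing the CPU to someone else.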


Sunday, April 28, 2013

What is fragmentation? What are different types of fragmentation?


In the field of computer science, fragmentation is an important factor concerning the performance of a system. It has a great role to play in bringing down the performance of computers. 

What is Fragmentation?

- Fragmentation can be defined as a phenomenon involving the inefficient use of storage space, which in turn reduces the capacity of the system and also brings down its performance.  
- This phenomenon leads to wasted memory; the term itself essentially means "wasted space".
- Fragmentation takes three different forms, as mentioned below:
  1. External fragmentation
  2. Internal fragmentation and
  3. Data fragmentation
- All these forms of fragmentation might be present in conjunction with each other or in isolation. 
- In some cases, fragmentation might be accepted in exchange for simplicity and speed elsewhere in the system. 

Basic Principle behind Fragmentation
- The system allocates memory in the form of blocks or chunks whenever a computer program requests it. 
- When the program has finished executing, the allocated chunks can be returned to the system memory. 
- The size of the memory chunks required varies from program to program.
- In its lifetime, a program may request any number of memory chunks and free them after use. 
- When a program begins execution, the free memory areas available for allocation are long and contiguous. 
- After prolonged usage, these contiguous memory areas become fragmented into smaller and smaller parts. 
- Eventually, a stage comes when it becomes almost impossible for the system to serve a program's demand for a large block of memory. 

Types of Fragmentation


1. External Fragmentation: 
- This type of fragmentation occurs when the available memory gets divided into smaller blocks that are interspersed with allocated memory. 
- Certain memory allocation algorithms have the weakness that they are at times unable to order the memory used by programs in a way that minimizes waste. 
- This leads to an undesirable situation where, even though free memory is available, it cannot be used effectively, because it is divided into pieces so small that individually they cannot satisfy the memory demands of the programs.  
- Since here the unusable storage lies outside the allocated memory regions, this type of fragmentation is called external fragmentation. 
- This type of fragmentation is also very common in file systems, where many files of different sizes are created and deleted. 
- The effect is worse if a deleted file was stored in many small pieces. 
- This is because deleting it leaves similarly small free chunks of space which may be of no use.
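
A toy C illustration of the effect (the map of used and free units is made up): half the memory is free, yet no request larger than one unit can be satisfied, because no two free units are adjacent.

    #include <stdio.h>

    int main(void)
    {
        /* 'U' = used unit, 'F' = free unit, alternating after heavy use. */
        const char memory[] = "UFUFUFUFUF";

        int free_units = 0, run = 0, longest_run = 0;
        for (int i = 0; memory[i] != '\0'; i++) {
            if (memory[i] == 'F') {
                free_units++;
                if (++run > longest_run)
                    longest_run = run;
            } else {
                run = 0;
            }
        }
        printf("free units: %d, largest contiguous block: %d\n",
               free_units, longest_run);   /* 5 free, but max block is 1 */
        return 0;
    }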

2. Internal Fragmentation: 
- There are certain rules that govern the process of memory allocation. 
- These lead to the allocation of more memory than is required. 
- For example, a common rule is that the memory allocated to programs must be in chunks divisible by 4, 8, or 16; so a program that actually requires 19 bytes gets 20 bytes (see the sketch after this list). 
- This wastes an extra 1 byte of memory. 
- In this case the unusable memory is contained within the allocated region itself, which is why this type of fragmentation is called internal fragmentation.
- In computer forensic investigations, this slack space is a very useful source of evidence. 
- However, internal fragmentation is often difficult to reclaim. 
- Making a change in the design is the most effective way to prevent it. 
- Memory pools in dynamic memory allocation are among the most effective methods for cutting down internal fragmentation. 
- In this approach the space overhead is spread over a large number of objects.
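
A small C sketch of the rounding arithmetic behind internal fragmentation; the 16-byte granularity is an assumption chosen for illustration, not any particular allocator's rule.

    #include <stdio.h>

    #define GRANULARITY 16   /* assumed allocator block size, for illustration */

    /* Round a request up to the allocator's granularity; the difference
     * is internal fragmentation, wasted inside the allocated region. */
    static size_t round_up(size_t n)
    {
        return (n + GRANULARITY - 1) / GRANULARITY * GRANULARITY;
    }

    int main(void)
    {
        size_t request = 19;
        size_t granted = round_up(request);
        printf("requested %zu, granted %zu, wasted inside block: %zu\n",
               request, granted, granted - request);   /* 19, 32, 13 */
        return 0;
    }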

3. Data Fragmentation: 
This occurs when data is broken up into many pieces that lie far apart from each other.


Friday, April 26, 2013

What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?


- Thrashing takes place when the virtual memory subsystem of a computer is caught in a constant state of paging.
- It rapidly exchanges data in memory with data on disk, to the exclusion of most application-level processing. 
- Thrashing degrades the performance of the computer and may even cause it to collapse. 
- The problem may worsen further until the issue is identified and addressed. 
- If there are not enough page frames available for a job, it becomes very likely that the system will suffer from thrashing, since the job will cause heavy paging activity. 
- This also leads to a high page fault rate. 
- This in turn cuts down the utilization of the CPU. 
- Modern systems use paging to execute many programs at once.
- However, this is exactly what makes them prone to thrashing. 
- Thrashing occurs only if the system does not currently have as much memory as the applications require, or if the disk access time is too long. 

- Thrashing is also quite common in communication systems, where conflicts over internal bus access are common. 
- The degree to which the latency and throughput of a system degrade depends on the configuration and the algorithms being used. 
- In systems that use virtual memory, programs and workloads presenting insufficient locality of reference may lead to thrashing. 
- Thrashing occurs when the physical memory of the system cannot contain the workload or the program. 
- Thrashing can also be described as constant data swapping.
- Many older systems were low-end computers whose RAM was insufficient for modern usage patterns. 
- Thus, when their memory was increased, they became noticeably faster. 
- This happened because the availability of more memory reduced the amount of swapping and thus increased the processing speed. 
- The IBM System/370 mainframe computer could face exactly this kind of situation. 
- In it, a certain instruction could consist of an execute instruction pointing to a move instruction. 
- Both of these instructions could cross a page boundary, and the source from which the data was to be moved and the destination where it was to be placed could each cross a page boundary as well. 
- This single instruction then required 8 pages to be in memory at the same time. 
- If the operating system allocated fewer than 8 pages, a page fault was certain to occur. 
- Every attempt to restart the failing instruction would then thrash. 
- This could reduce CPU utilization to almost zero!

How can a system handle thrashing?

For resolving the problem of thrashing, the following things can be done:
1. Increasing the amount of main memory, i.e., the RAM in the system. This is the best solution and also helps in the long term.
2. Decreasing the number of programs being executed by the system.
3. Replacing programs that use memory heavily with less memory-hungry equivalents.
4. Making improvements in spatial locality (illustrated in the sketch at the end of this post).

- Thrashing can also occur in the cache, the faster storage used for speeding up data access. 
- It is then called cache thrashing. 
- It occurs when the cache is accessed in a pattern that leaves it of no benefit. 
- When this happens, many main memory locations compete with each other for the same cache lines, which in turn leads to a large number of cache misses.
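
A C sketch of the spatial locality point: the same work done in two traversal orders. Column-major order touches a different cache line on nearly every access, so lines are evicted before they are reused, and on typical hardware that loop runs markedly slower (the exact timings will vary by machine).

    #include <stdio.h>
    #include <time.h>

    #define N 4096
    static int grid[N][N];   /* 64 MB; rows are contiguous in memory */

    int main(void)
    {
        clock_t t0 = clock();
        for (int i = 0; i < N; i++)        /* row-major: walks along */
            for (int j = 0; j < N; j++)    /* cache lines, good locality */
                grid[i][j]++;

        clock_t t1 = clock();
        for (int j = 0; j < N; j++)        /* column-major: jumps 16 KB */
            for (int i = 0; i < N; i++)    /* per access, thrashes lines */
                grid[i][j]++;

        clock_t t2 = clock();
        printf("row-major:    %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("column-major: %.2f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }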


Wednesday, April 24, 2013

What is multi-tasking, multi-programming and multi-threading?


When it comes to computing, there are three important, inter-related concepts: multi-programming, multi-tasking, and multi-threading. 

What is Multitasking?

- Multi-tasking emerged out of need: while the system performed one task, a lot of time was wasted. 
- As their needs grew, people wanted the computer to perform many tasks at the same time. Multi-tasking is what we call this. 
- Here, multiple tasks or processes are carried out simultaneously.
- The common processing resources, i.e., the main memory and the CPU, are shared by these processes. 
- If the system has only one CPU to work with, then it can only run one task at a time. 
- Such systems achieve multi-tasking by scheduling all the processes required to be carried out. 
- The CPU runs one task while the others wait in the pipeline.
- The CPU is reassigned to each of the tasks in turn, and each reassignment is termed a context switch. 
- When this happens very frequently, it gives the illusion that the processes are being executed in parallel. 
- Other systems, called multi-processor machines, have more than one CPU and can still perform a number of tasks greater than the number of CPUs. 
- There are a number of scheduling strategies that might be adopted by operating systems:
Ø  Multi-programming
Ø  Time-sharing
Ø  Real-time systems

What is Multi-Programming?

- Early peripheral devices were very slow, and CPU time was therefore a luxury, and expensive. 
- Whenever an executing program accessed a peripheral, the CPU had to keep waiting for the peripheral to finish processing the data. 
- This was very inefficient. 
- Then came the concept of multi-programming, which was a very good solution. 
- When a program reached the waiting state, its context was stored in memory and the CPU was given some other program to execute. 
- This processing continued until all the processes at hand were completed. 
- Later, developments such as virtual machine technology (VMT) and virtual memory greatly increased the efficiency of multi-programming systems. 
- With these two technologies, programs could make use of the OS and memory resources as if they were the only programs executing. 
- However, there is one drawback to multi-programming: it does not guarantee that every program will be executed in a timely manner. 
- Even so, it was of great help in processing multiple batches of programs.

What is Multi-threading?

 
- With multi-tasking, a great improvement was seen in the throughput of computer systems. 
- Programmers therefore found themselves implementing programs as sets of cooperating processes.
- Here, the processes were assigned different tasks: one would take the input, another would process it, and a third would write the output to the display. 
- But for this, tools were required that allowed an efficient exchange of data.
- Threads were an outcome of the idea that processes can be made to cooperate most efficiently if they share their memory space.
- Therefore, threads can be defined as processes that run in the same memory context. 
- These threads are said to be light-weight, because there is no need for a memory context change when switching between them. 
- The scheduling followed here is usually preemptive. 
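
A minimal pthreads sketch of the shared-memory-context idea: both threads see the same variable directly, with no copying between address spaces, which is exactly why access to it has to be synchronized. (Compile with -pthread.)

    #include <stdio.h>
    #include <pthread.h>

    static long shared = 0;   /* one memory context: both threads see this */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* shared memory needs guarding */
            shared++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("shared = %ld\n", shared); /* 200000: no update was lost */
        return 0;
    }

Contrast this with the fork() example in the earlier post on processes: there, the two processes each had their own copy of memory, while here one variable is genuinely shared.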

