
Friday, September 13, 2013

What is Portability Testing?

- Portability testing is the testing of software, a component, or an application to determine the ease with which it can be moved from one machine platform to another.
- In other words, it is the process of verifying the extent to which a software implementation behaves the same way on platforms and processors other than the one it was developed on.
- It can also be understood as the amount of work or effort required to move software from one environment to another without changing or modifying the source code; in the real world, however, this is seldom possible.
For example, moving a computer application from a Windows XP environment to a Windows 7 environment, measuring the effort and time required to make the move, and thereby determining whether it can be reused with ease or not.

- Portability testing is also considered a sub-part of system testing, since it covers complete testing of the software as well as its reusability across different computing environments, including different operating systems and web browsers.

What needs to be done before portability testing is performed (prerequisites/preconditions)?
1.   Keep the portability requirements in mind before designing and coding the software.
2.   Unit and integration testing must have been performed.
3.   The test environment must have been set up.

Objectives of Portability Testing
  1. To validate the system partially, i.e. to determine whether the system under consideration fulfills the portability requirements and can be ported to environments with different:
a). RAM and disk space
b). Processor and processor speed
c). Screen resolution
d). Operating system and its version in use
e). Browser and its version in use
This includes ensuring that the look and feel of the web pages is similar and functional across the various browser types and versions.

2.   To identify the causes of failures related to the portability requirements; this in turn helps in identifying flaws that were not found during unit and integration testing.
3.   The failures must be reported to the development teams so that the associated flaws can be fixed.
4.   To determine the extent to which the software is ready for launch.
5.   To help provide project status metrics (e.g., the percentage of use case paths that were successfully tested).
6.   To provide input to the defect trend analysis effort.
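
To make objective 1 concrete, the sketch below records some of these environment attributes on a target machine. This is a minimal illustration, assuming Python and its standard library are available on each target; RAM, screen resolution and browser version are omitted because the standard library alone cannot query them.

import platform
import shutil
import sys

# Record environment attributes for a portability test report
# (illustrative sketch only).
env = {
    "os": platform.system(),                         # operating system
    "os_version": platform.version(),                # OS version in use
    "processor": platform.machine(),                 # processor architecture
    "python_version": sys.version.split()[0],
    "free_disk_bytes": shutil.disk_usage("/").free,  # available disk space
}

for key, value in env.items():
    print(f"{key}: {value}")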



Wednesday, June 19, 2013

Explain the Priority CPU scheduling algorithm

A number of scheduling algorithms are available today, each appropriate for a different kind of scheduling environment. In this article we give a brief explanation of the ‘priority CPU scheduling algorithm’.

For those who are not familiar with this scheduling algorithm, a special case of the priority algorithm is the shortest job first (SJF) scheduling algorithm.

- This algorithm involves associating a priority with each thread or process.
- Of all the ready processes, the one with the highest priority is chosen and given to the processor for execution.
- Thus, the priority decides which process is executed first.
There are cases when two or more processes have the same priority.
- In such cases the FCFS (first come, first served) scheduling algorithm is applied.
The process that entered the queue first is then executed first.
- SJF is essentially a special case of the priority algorithm.
- Here, the priority of a process (denoted p) is simply taken as the inverse of its predicted next CPU burst.
- This implies that if a process has a large CPU burst, its priority will be low; similarly, if the CPU burst is small, the priority will be high.
Numbers in some fixed range, such as 0 to 7 or 0 to 4095, are used to indicate priorities.
- One thing to note is that there is no general agreement on whether 0 indicates the lowest or the highest priority.
- In some systems low numbers indicate low priorities, while in other systems low numbers mean higher priorities.
- The latter convention, i.e., using low numbers to represent high priorities, is more common.
For example, consider five processes P1, P2, P3, P4 and P5 with CPU bursts of 10, 1, 2, 1 and 5 respectively, and priorities of 3, 1, 4, 5 and 2 respectively. Using the priority scheduling algorithm (with low numbers meaning high priorities), the processes will be executed in the following order:
P2, P5, P1, P3, P4
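
The example above can be reproduced with a short sketch. This is a minimal illustration of non-preemptive priority scheduling (low number = high priority); the process names, bursts and priorities are taken from the example, and Python's stable sort provides the FCFS fallback for equal priorities.

# Each tuple is (name, CPU burst, priority); data from the example above.
processes = [
    ("P1", 10, 3),
    ("P2", 1, 1),
    ("P3", 2, 4),
    ("P4", 1, 5),
    ("P5", 5, 2),
]

# Sort by priority; the stable sort preserves FCFS order on ties.
schedule = sorted(processes, key=lambda p: p[2])

clock = 0
for name, burst, priority in schedule:
    print(f"{name} starts at {clock} and runs for {burst}")
    clock += burst   # non-preemptive: each process runs to completion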

There are two ways of defining priorities, i.e., either externally or internally. This gives two types of priorities:

  1. Internally defined priorities: These make use of measurable quantities to compute a process’s priority. Such quantities include memory requirements, time limits, the ratio of I/O burst to CPU burst, the number of open files and so on.
  2. Externally defined priorities: These are defined by criteria external to the operating system. Such factors include political factors, the department sponsoring the work, the importance of the process, the amount of money paid and so on.
Priority scheduling can itself be divided into two types, namely non-preemptive and preemptive. The priority of the process arriving in the ready queue is compared with that of the executing process.

Ø  Preemptive priority scheduling: Here the CPU is preempted if the arriving process has a priority higher than that of the currently executing process.
Ø  Non-preemptive priority scheduling: Here the new process is left waiting in the ready queue until the execution of the current process is complete.
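
The difference between the two variants comes down to a single comparison made when a new process arrives. A minimal sketch of that check, again assuming low numbers mean high priorities:

def should_preempt(running_priority, arriving_priority):
    # Preemptive variant: preempt the CPU if the arriving process has
    # a higher priority (lower number) than the running one. The
    # non-preemptive variant never performs this check.
    return arriving_priority < running_priority

print(should_preempt(3, 1))   # True: the new arrival takes the CPU
print(should_preempt(1, 3))   # False: the new arrival waits in the queue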


Starvation, or indefinite blocking, presents a major problem in priority scheduling. A process is considered blocked if it is ready for execution but has to wait for the CPU. It is very likely that low priority processes will be left waiting indefinitely for the CPU. In a system that is heavily loaded most of the time, if the number of high priority processes is large, the low priority processes will be prevented from ever getting the processor.


Monday, June 17, 2013

Explain the Round Robin CPU scheduling algorithm

There are a number of CPU scheduling algorithms, each having different properties that make it appropriate for different conditions.
- The round robin (RR) scheduling algorithm is commonly used in time sharing systems.
- It is the most appropriate scheduling algorithm for time sharing operating systems.
- This algorithm shares many similarities with the FCFS scheduling algorithm, but with one additional feature.
- This feature is preemption, with a context switch occurring between two processes.
- In this algorithm a small unit of time is defined, termed the time slice or time quantum.
- These time slices or quanta typically range from 10 ms to 100 ms.
- The ready queue in round robin scheduling is implemented in a circular fashion.

How to implement Round Robin CPU scheduling algorithm

Now we shall see how round robin scheduling is implemented:
  1. The ready queue is maintained as a FIFO (first in, first out) queue of processes.
  2. New processes are added at the rear of the ready queue, and the process to be executed next is selected from the front.
  3. The CPU scheduler thus picks the process at the front of the ready queue. A timer is set to interrupt the processor when the time slice elapses; when this happens, the running process is preempted.
  4. In some cases the CPU burst of a process may be less than the time slice. If this is the case, the process releases the CPU voluntarily. The scheduler then moves to the next process in the ready queue and fetches it for execution.
  5. In other cases the CPU burst of a process may be longer than the time slice. In this case the timer sends an interrupt to the processor, preempting the process and putting it at the rear of the ready queue. The scheduler then moves to the next process in the queue.
The average waiting time under the round robin scheduling algorithm has been observed to be quite long.
- In this algorithm, no process is allocated more than one time slice in a row under any conditions.
- However, there is an exception to this when there is only one process left to execute.
- If a process’s burst exceeds the time slice, the process is preempted and put back at the tail of the queue.
- Thus, we can call this a preemptive algorithm as well.
- The size of the time quantum greatly affects the performance of the round robin algorithm.
- If the time quantum is too large, the algorithm degenerates into FCFS.
- On the other hand, if the quantum is very small, the RR approach is called the processor sharing approach.
- An illusion is created in which it seems every process has its own processor running at a fraction of the speed of the actual processor.
- Further, context switching affects the performance of the RR scheduling algorithm.
- A certain amount of time is spent switching from one process to another.
In this time the registers and memory maps are loaded, a number of lists and tables are updated, the memory cache is flushed and reloaded, and so on.
- The smaller the time quantum, the more often context switching occurs.
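
The behavior described above, one time slice per turn with preemption back to the tail of the queue, can be captured in a short sketch. The quantum of 4 and the burst values are illustrative choices, not values from any particular system.

from collections import deque

def round_robin(processes, quantum=4):
    # processes: list of (name, CPU burst) tuples.
    queue = deque(processes)           # circular ready queue (FIFO)
    clock, finished = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # at most one time slice in a row
        clock += run
        if remaining > run:
            # Burst exceeds the slice: preempt and put the process
            # back at the rear of the ready queue.
            queue.append((name, remaining - run))
        else:
            # Burst fits in the slice: the CPU is released voluntarily.
            finished.append((name, clock))
    return finished

print(round_robin([("P1", 10), ("P2", 3), ("P3", 6)]))
# [('P2', 7), ('P3', 17), ('P1', 19)]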


Thursday, June 6, 2013

Explain the structure of operating systems.

We are all used to working with computers, but we rarely bother to find out what is actually inside, i.e., what operates the whole system. Then something inevitable occurs: your computer system crashes and the machine is not able to boot. You call a software engineer, and he tells you that the operating system of the computer has to be reloaded. You are of course familiar with the term operating system, but do you know what it is exactly?

About Operating System

- The operating system is the software that actually gives life to the machine.
- Every computer system requires some basic intelligence to start with.
Unlike us humans, computers do not have any inborn intelligence.
- This basic intelligence is required because it is what the system uses to provide the essential services for running programs, such as access to various peripherals, use of the processor, allocation of memory and so on.
The operating system also provides services directly to users.
- As a user, you may need to create, copy or delete files.
- It is the operating system that manages the hardware of the computer system.
- It also sets up a proper environment in which programs can be executed.
It is in effect an interface between the software and the hardware of the system.
- When the computer boots, the operating system is loaded into main memory.
- The OS remains active as long as the system is running.

Structure of Operating Systems

- There are several components of the operating system, which we discuss in this article.
- These components make up the structure of the operating system.

1. Communications: 
- Processes may exchange information and data within the same computer or between different computers via a network.
- This information may be shared via shared memory within the same computer system, or via message passing across a computer network.
- In message passing, the messages are moved by the operating system.

2. Error detection: 
- The operating system has to be alert to all the possible errors that might occur.
- These errors may occur anywhere, from the CPU and memory hardware to peripheral devices and user applications.
- For each type of error, the operating system must take appropriate action to ensure correct and consistent computing.
- Debugging facilities greatly enhance the abilities of users and programmers.

3. Resource allocation: 
- Resources have to be allocated to all running processes.
- Some resources, such as main memory, file storage and CPU cycles, have special allocation code, while other resources, such as I/O devices, may have general request and release code.

4. Accounting: 
- This component is responsible for keeping track of the computer resources being used and released.

5. Protection and Security: 
- The owners of data and information may want it protected and secured against theft and accidental modification.
- Above all, processes should not interfere with each other’s work.
- The protection aspect involves controlling access to all the resources of the system.
- Security involves user authentication, to protect devices from invalid access attempts.

6. Command line interface or CLI: 
- This is the command interpreter, which allows commands to be entered directly.
- It is implemented either by a systems program or by the kernel.
- There are also a number of shells providing multiple implementations.
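
As an illustration of what a command interpreter does, here is a minimal sketch of a read-dispatch loop. The two commands (echo, exit) are hypothetical examples for the sketch, not the built-ins of any real shell.

import shlex

def cmd_echo(args):
    print(" ".join(args))

def cmd_exit(args):
    raise SystemExit

COMMANDS = {"echo": cmd_echo, "exit": cmd_exit}

while True:
    try:
        line = input("$ ")            # direct entry of the command
    except EOFError:
        break
    parts = shlex.split(line)         # split the line like a shell would
    if not parts:
        continue
    handler = COMMANDS.get(parts[0])
    if handler:
        handler(parts[1:])            # dispatch to the command's handler
    else:
        print(f"{parts[0]}: command not found")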

7. Graphical User Interface: 
This is the interface via which the user interacts with the system graphically, through windows, icons and menus rather than typed commands.


Tuesday, May 28, 2013

Concept of page fault in memory management

A page fault (also known as PF or #PF) can be thought of as a trap raised by the hardware for the software whenever a program tries to access a page that has been mapped into the virtual address space but has not been loaded into main memory.

In most cases, the page fault is handled by the operating system, which makes the required page accessible at an address in main (physical) memory, or sometimes terminates the program if it makes an illegal attempt to access the page.

- The memory management unit, located in the processor, is the hardware responsible for detecting page faults.
- The software that helps the memory management unit handle page faults is the exception handling code, which is part of the OS.
- A ‘page fault’ is not always an error.
- Page faults often play a necessary role in increasing the memory available to software applications.
- This extra memory is made available through the operating system’s virtual memory.
- ‘Hard fault’ is the term Microsoft uses instead of ‘page fault’ in the latest versions of Resource Monitor.

Classification of Page Faults

Page faults can be classified into three categories, namely:

1. Minor: 
- This type of fault is also called a soft page fault and occurs when the page is already loaded in memory at the time the fault is generated, but the memory management unit has not marked it as loaded in physical memory.
- The operating system includes a page fault handler whose duty is to make an entry for the page in the memory management unit.
- After making the entry, its task is to indicate that the page has been loaded.
- However, it is not necessary for the page to be read into memory.
This is possible when different programs share memory and the page has already been loaded into memory for another application.
- In operating systems that apply the technique of secondary page caching, a page can be removed from the working set of a process without being deleted or written to disk.

2. Major: 
- A major fault is actually the mechanism many operating systems use to increase the memory available to a program on demand.
- The operating system delays loading parts of the program from disk until the program attempts to use them and generates a page fault.
- In this case the page fault handler has to find a page in memory, either a free one or, failing that, a non-free one to reclaim.
- When a page is available, the operating system can read the data for the required page into it, and make an entry for that page in the memory management unit.

3. Invalid: 
- This type of fault occurs whenever a reference is made to an address that does not exist in the virtual address space and therefore has no corresponding page in memory.
- The page fault handler must then terminate the code that made the reference and give an indication of the invalid reference.
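
The three categories can be illustrated with a toy fault handler. This is a minimal sketch; the page table, the resident and on-disk sets, and the frame names are illustrative stand-ins, not a real MMU interface.

page_table = {}            # virtual page -> physical frame (the MMU's view)
resident = {3: "frame7"}   # pages actually present in physical memory
on_disk = {5}              # pages of the program still on disk
valid = {3, 5}             # pages that exist in the virtual address space

def handle_fault(page):
    if page not in valid:
        # Invalid: no corresponding page in the virtual address space.
        return "invalid - terminate the referencing code"
    if page in resident:
        # Minor/soft: the page is in memory but unmapped, so just
        # point the page table at the existing frame. No disk I/O.
        page_table[page] = resident[page]
        return "minor - mapped an existing frame"
    # Major: find a free frame and read the page in from disk.
    frame = "frame0"               # stand-in for frame allocation
    resident[page] = frame
    page_table[page] = frame
    return "major - page read from disk into a free frame"

for page in (3, 5, 9):
    print(page, "->", handle_fault(page))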


Sunday, May 19, 2013

What are the different types of schedulers and how do they work?


Scheduling is an important part of the working of operating systems.
- The scheduler is the component that gives processes, threads and data flows access to resources.
- These resources may include processor time and communications bandwidth.
- Scheduling is necessary for effectively balancing the load of the system and achieving the target QoS, or quality of service.
- Scheduling is also necessary for systems that do multitasking and multiplexing on a single processor, since they need to divide the CPU time between many processes.
- In multiplexing, it is required for timing the simultaneous transmission of multiple flows.

Important things about Scheduler

There are 3 things which most concern the scheduler:
  1. Throughput
  2. Latency, including response time and turnaround time
  3. Waiting time, or fairness
- In practice, conflicts arise between these goals, for example between latency and throughput.
- It is the scheduler that makes a compromise between any two such goals.
Which goal is given preference is decided by the user’s requirements and objectives.
- In systems that operate in real time environments, such as embedded systems and robotics, the scheduler has to ensure that processes are capable of meeting their deadlines.
- This is a very critical factor in maintaining the stability of the system.
- An administrative back end can be used to manage scheduled tasks that are then sent to mobile devices.

Types of Schedulers

There are 3 different types of schedulers available, which we discuss below:

Long-term Schedulers or Admission Schedulers:
- The purpose of this type of scheduler is to decide which processes and jobs are to be admitted, i.e., added to the ready queue.
- When a program attempts to execute a process, it is the responsibility of the long-term scheduler to delay or authorize the request to admit the process to the ready queue.
- Thus, this scheduler dictates which processes the system will execute.
- It also dictates the degree of concurrency and the handling of CPU-intensive and I/O-intensive processes.
- Modern operating systems use this to make sure that processes have enough time to finish their tasks.
- Modern GUIs would be of very little use if there were no real time scheduling.
The long-term queue resides in secondary memory.
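
A minimal sketch of the admission decision follows; the job names and the limit of three ready processes (the degree of multiprogramming) are illustrative values only.

from collections import deque

MAX_READY = 3                                       # illustrative limit
job_queue = deque(["J1", "J2", "J3", "J4", "J5"])   # long-term queue
ready_queue = deque()

def admit():
    # Authorize jobs while there is room; delay the rest.
    while job_queue and len(ready_queue) < MAX_READY:
        ready_queue.append(job_queue.popleft())

admit()
print("ready:", list(ready_queue))   # J1, J2, J3 admitted
print("waiting:", list(job_queue))   # J4, J5 delayed in secondary memory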

Medium-term Schedulers:
- This scheduler serves the purpose of removing processes from physical memory and placing them in virtual memory, and vice versa.
This is called swapping out and swapping in.
- A process that has been inactive for some time might be swapped out by the scheduler.
- It may also swap out a process with frequent page faults, a low priority, a large memory footprint and so on.
- This is necessary since it makes space available for other processes.

Short-term Schedulers:
- These schedulers are more commonly known as CPU schedulers.
- The short-term scheduler decides which of the ready processes will execute next after a clock interrupt, a system call, an I/O interrupt, a hardware interrupt and so on.
- Thus, the short-term scheduler makes decisions much more frequently than the long-term and medium-term schedulers, since it has to decide after every time slice.
There is one more component involved in CPU scheduling that is not counted among the schedulers. It is called the dispatcher.


Friday, May 3, 2013

What is a Dispatcher?


A number of types of schedulers are available to suit the different needs of different operating systems. Presently, there are three categories of schedulers:
  1. Long-term schedulers
  2. Medium-term schedulers
  3. Short-term schedulers
Apart from the schedulers, one more component is involved in the scheduling process, known as the dispatcher.
- It is the dispatcher that gives a process control of the CPU.
- Which process is to be given this control is selected by the short-term scheduler.
- This whole process involves the following three steps:
  1. Switching the context
  2. Switching to user mode
  3. Jumping to the proper location in the program from where it has to be restarted.
- The dispatcher reads the value of the program counter and accordingly fetches instructions and feeds data into the registers.
- The dispatcher, unlike other system components, needs to be very quick, since it is invoked during every switch that occurs.
- Whenever a context switch is invoked, the processor is effectively idle for a very small period of time.
- Hence, unnecessary context switches should be avoided.
- The dispatcher takes some time to stop one process and start running another.
- This time is called the dispatch latency.

- Scheduling and dispatch are complex processes that are interrelated.
- Both are essential to the operation of the operating system.
Today, architectural extensions are available in modern processors that provide a number of banks of registers.
- These register banks can be swapped in hardware, so a certain number of tasks can retain their full register sets.
- Whenever an interrupt triggers the dispatcher, the dispatcher receives the full set of registers belonging to the process that was being executed when the interrupt occurred.
- Here, the program counter is not included.
- Therefore, it is important that the dispatcher be written carefully to store the present state of the registers when it is triggered.
- In other words, for the dispatcher itself there is no immediate context.
- This saves it from the same problem.

Process of Dispatcher

Below we describe in simple words what the process actually is.
  1. The program currently holding the context is executed by the processor. Things used by this program include the stack base, flags, program counter, registers and so on, with the possible exception of a reserved register native to the operating system. The executing program has no knowledge of the dispatcher.
  2. A timed interrupt is triggered for the dispatcher. The program counter jumps to the address listed in the interrupt vector table. This marks the beginning of the execution of the dispatch subroutine. The dispatcher then deals with the stacks, registers and so on of the program that was interrupted.
  3. The dispatcher, like other programs, consists of sets of instructions that operate on the registers of the current program. These instructions have access to the state of the previously executing program. Of these, the first few instructions are responsible for storing that state.
  4. The dispatcher next determines which program should be given the CPU next for execution. It then replaces the stored state of the previously executed program with the details of the next process to be executed.
  5. The dispatcher jumps to the address given in the program counter and establishes a full context on the processor.
- In fact, the dispatcher does not really require registers of its own, since its only work is to write the current state of the CPU into a predetermined memory location.
- It then loads another process into the CPU from another predetermined location.
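
Putting the steps together, here is a minimal sketch of a dispatch, with a plain dictionary standing in for the hardware register set; the field names and values are illustrative only.

cpu = {"pc": 0, "sp": 0, "flags": 0}        # stand-in register set

def dispatch(outgoing, incoming):
    outgoing["saved_state"] = dict(cpu)     # store the current CPU state
    cpu.update(incoming["saved_state"])     # load the next process's state
    return cpu["pc"]                        # resume at its program counter

p1 = {"name": "P1", "saved_state": {"pc": 100, "sp": 500, "flags": 0}}
p2 = {"name": "P2", "saved_state": {"pc": 200, "sp": 700, "flags": 1}}

cpu.update(p1["saved_state"])               # P1 currently holds the CPU
print("resume at", dispatch(p1, p2))        # context switch P1 -> P2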


Monday, April 29, 2013

What is cache memory?


Cache memory is a memory aid for computers that speeds them up considerably.
- In cache memory, data is stored transparently so that future requests can be served faster.
- A cache might store values that have already been computed, or duplicates of values stored somewhere else in memory.
- Whenever some data is requested, it is first looked up in the cache memory.
- If the data is found there, it is returned to the processor; this is called a ‘cache hit’.
- In this case the time taken to access the data is reduced.
- The access is thus faster than an access to the main memory.
- The other case is a ‘cache miss’, when the required data is not found in the cache.
- Then the data has to be fetched or computed from its original source or storage location, which is obviously slower.
- The overall performance of the system increases in proportion to the number of requests that can be served from the cache memory.
- In order to maintain cost efficiency as well as efficiency in data usage, the size of the cache is kept relatively small compared to the main memory.
However, caches have proven themselves time and again because of their ability to exploit the access patterns of applications having some locality of reference.
- References exhibit temporal locality if data that was previously requested is requested once again.
- References also exhibit spatial locality if the storage location of the requested data is close to data that was previously requested.

How is cache implemented?

- The cache is implemented by the hardware as a memory block used as a place of temporary storage.
- Only data that is likely to be accessed again and again is stored here.
Caches are used not only by hard drives and CPUs but also by web servers and browsers.
- A cache is made up of a pool of entries.
- Each entry has a datum associated with it, a copy of which is stored in the backing store.
- Each entry is also tagged to specify the identity of the datum in the backing store.
- When a datum needs to be accessed by a cache client (which might be an operating system, CPU or web browser) that thinks it might be available in the backing store, the cache is checked first.
- If the desired entry is found, it is returned for use. This is a cache hit.
- For example, a web browser might look in its local cache on disk to see if it has the contents of a web page.
- In this case the URL serves as the search tag and the page contents are the datum.
- The rate of successful cache accesses is known as the hit rate of the cache.
- In case of a cache miss, the datum that was not cached is copied into the cache so as to serve future requests.
- To make space for this datum, some already existing datum in the cache may have to be removed.
- Which datum is to be removed is determined using a replacement algorithm.
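
One common replacement algorithm is least recently used (LRU). The sketch below is a minimal illustration of the hit/miss behavior described above; the backing store dictionary and the capacity of two entries are illustrative stand-ins for a slow data source.

from collections import OrderedDict

backing_store = {"a": 1, "b": 2, "c": 3, "d": 4}  # slow source of data

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()          # tag -> datum, oldest first

    def get(self, tag):
        if tag in self.entries:               # cache hit: fast path
            self.entries.move_to_end(tag)     # mark as most recently used
            return self.entries[tag]
        datum = backing_store[tag]            # cache miss: slow fetch
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[tag] = datum             # cache it for future requests
        return datum

cache = LRUCache(capacity=2)
for tag in ("a", "b", "a", "c", "b"):
    cache.get(tag)                            # "c" evicts "b"; "b" evicts "a"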


Sunday, March 24, 2013

What are the types of artificial neural networks?


In this article we discuss the types of artificial neural networks. These models simulate the biological nervous system.
1. Feed forward neural network: 
- This is the simplest type of neural network ever devised.
- In these networks the information flow is unidirectional; the data moves only in the forward direction.
- Data flows from the input nodes to the output nodes via the hidden nodes (if there are any).
- In this model there are no loops or cycles.
- Different types of units can be used for constructing feed forward networks, for example McCulloch–Pitts neurons.
- Continuous neurons with sigmoidal activation are used in networks trained by error back propagation.
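
To make the forward data flow concrete, below is a minimal sketch of a forward pass through one hidden layer with sigmoidal activation. The weights and biases are arbitrary illustrative numbers, not trained values.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each unit outputs the sigmoid of the weighted sum of its inputs.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                            # input nodes
hidden = layer(x, [[0.4, 0.3], [-0.2, 0.7]], [0.1, -0.1])  # hidden nodes
output = layer(hidden, [[0.6, -0.5]], [0.2])               # output node
print(output)   # data moved strictly forward: input -> hidden -> output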
2. Radial basis function network: 
- Radial basis functions are among the most powerful tools for interpolating in a multi-dimensional space.
- These functions can be built into a criterion of distance with respect to some center.
- These functions can be applied in neural networks.
- In these networks, they can replace the sigmoidal transfer characteristic of the hidden layer.
3. Kohonen self–organization network: 
- Unsupervised learning is performed with the help of the self-organizing map, or SOM.
- This map was an invention of Teuvo Kohonen.
- A set of neurons learns to map points in the input space to coordinates in the output space.
- The dimensions and topology of the input space can be different from those of the output space; the SOM makes an attempt to preserve these.
4. Learning vector quantization or LVQ: 
- This can also be considered a neural network architecture.
- This one too was a suggestion of Teuvo Kohonen.
- In LVQ, prototypical representatives of the classes are parameterized, along with two important things, namely a distance-based classification scheme and a distance measure.
5. Recurrent neural network: 
- These networks are somewhat contrary to the feed forward networks.
- They offer a bi-directional flow of data.
- In a feed forward network data is propagated linearly from input to output.
- A recurrent network also transfers data from later stages of processing back to earlier stages.
- Sometimes these also double up as general sequence processors.
- Recurrent neural networks come in a number of types, as mentioned below:
Ø  Fully recurrent network
Ø  Hopfield network
Ø  Boltzmann machine
Ø  Simple recurrent networks
Ø  Echo state network
Ø  Long short-term memory network
Ø  Bi-directional RNN
Ø  Hierarchical RNN
Ø  Stochastic neural networks
6. Modular neural networks: 
- Studies have shown that the human brain actually works as a collection of several small networks rather than as just one huge network. This insight helped in realizing modular neural networks, in which smaller networks cooperate to solve a problem.
- Modular networks are also of many types, such as:
Ø  Committee of machines: Different networks that work together on a given problem are collectively termed a committee of machines. The result achieved through this kind of networking is usually better than what is achieved with the individual networks, and it is much more stable.
Ø  Associative neural network or ASNN: This is an extension of the previous one, and extends a little beyond the weighted average of the various models. It is a combined form of the k-nearest neighbor technique (kNN) and feed forward neural networks. Its memory is coincident with the training set.
7. Physical neural network: 
- It consists of some electrically adjustable resistance material capable of simulating artificial synapses.
There are other types of ANNs that do not fall into any of the above categories:
Ø  Holographic associative memory
Ø  Instantaneously trained networks
Ø  Spiking neural networks
Ø  Dynamic neural networks
Ø  Cascading neural networks
Ø  Neuro-fuzzy networks
Ø  Compositional pattern producing networks
Ø  One-shot associative memory

