

Showing posts with label Implementation. Show all posts

Monday, June 24, 2013

Explain the page replacement algorithms - FIFO, LRU, and Optimal

- Paging is used by most operating systems for virtual memory management.
- Whenever a page fault occurs, some pages are swapped in and others swapped out. Who decides which pages are to be replaced, and how?
- This purpose is served by the page replacement algorithms.
- A page replacement algorithm decides which page is to be paged out (written back to disk) when a frame of memory has to be allocated.
- Paging takes place only upon the occurrence of a page fault.
- In such situations a free frame cannot suffice, either because none is available or because the number of free frames is below a threshold.
- If a previously paged-out page is referenced again, it has to be read in from the disk once more.
- For this, the operating system has to wait for the completion of the input/output operation.
- The quality of a page replacement algorithm is judged by the time it takes to service a page-in.
- The lesser this time, the better the algorithm.
- A page replacement algorithm studies the information about page accesses provided by the hardware and then decides which pages should be replaced so that the number of page faults is minimized.

In this article we shall look at some of the page replacement algorithms.

FIFO (first – in, first – out): 
- This is the simplest of all the page replacement algorithms; it has the lowest overhead and requires little bookkeeping on the part of the OS.
- The operating system keeps all the pages resident in memory in a queue.
- The ones that have arrived recently are kept at the back, while the old ones stand at the front of the queue.
- When a replacement has to be made, the oldest page is selected and replaced.
- Even though this replacement algorithm is cheap as well as intuitive, in practice it does not perform well.
- Therefore, it is rarely used in its original form.
- The VAX/VMS operating system makes use of the FIFO replacement algorithm after making some modifications.
- By skipping a limited number of entries when a page is referenced again, it gives pages a partial second chance.
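The FIFO policy above can be sketched in a few lines of Python. This is an illustrative simulation, not OS code; the reference string is the classic one used to demonstrate Belady's anomaly, where adding frames can increase the fault count:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement (illustrative sketch)."""
    frames = set()              # pages currently resident in memory
    queue = deque()             # arrival order of resident pages
    faults = 0
    for page in reference_string:
        if page in frames:
            continue            # hit: no replacement needed
        faults += 1
        if len(frames) == num_frames:
            victim = queue.popleft()   # evict the oldest page
            frames.remove(victim)
        frames.add(page)
        queue.append(page)
    return faults

# Classic reference string used to illustrate Belady's anomaly:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # 9 faults
print(fifo_page_faults(refs, 4))  # 10 faults: more frames, yet more faults
```

Note that with 4 frames FIFO suffers more faults than with 3, which is one concrete reason it "does not perform well" in practice.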

Least recently used (LRU): 
- This replacement algorithm resembles NRU in name.
- The difference, however, is that LRU tracks page usage over a certain period of time.
- It is based on the idea that pages used heavily in the last few instructions will also be used heavily in the next few.
- In theory LRU provides near-optimal performance, but in practice it is quite expensive to implement.
- To reduce the cost of implementing this algorithm, a few implementation methods have been devised.
- Of these, the linked list method proves to be the costliest.
- It is so expensive because it involves moving items about in the list on every memory reference, which is indeed a very time-consuming task.
- There is another method that requires hardware support instead.
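The linked-list bookkeeping described above can be sketched with Python's `OrderedDict`, which keeps entries ordered and lets us move an entry to the end on every access, exactly the costly per-reference work the text mentions:

```python
from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    """Count page faults under LRU, using an ordered map that stands in
    for the linked-list bookkeeping: every access reorders the list."""
    frames = OrderedDict()      # keys ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)     # mark as most recently used
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)   # evict the least recently used page
        frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_page_faults(refs, 3))   # 10 faults
```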

Optimal page replacement algorithm: 
- This is also known as the clairvoyant replacement algorithm.
- The page to be swapped in replaces the page that, looking into the future, will not be used again for the longest time.
- In practice, this algorithm is impossible to implement in a general-purpose OS, because the time at which a page will next be used cannot be reliably predicted.
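A simulation, however, can be clairvoyant because the whole reference string is known in advance, which is exactly what a real OS lacks. The sketch below evicts the resident page whose next use lies farthest in the future:

```python
def optimal_page_faults(reference_string, num_frames):
    """Count page faults under the clairvoyant (optimal) algorithm.
    It may look ahead in reference_string, which a real OS cannot do."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use is farthest in the future
        # (or that is never used again).
        def next_use(p):
            future = reference_string[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_page_faults(refs, 3))   # 7 faults
```

On this reference string the optimal algorithm incurs 7 faults, a lower bound that FIFO (9) and LRU (10) can only approach.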


Monday, June 17, 2013

Explain the Round Robin CPU scheduling algorithm

There are a number of CPU scheduling algorithms, all having different properties, thus making them appropriate for different conditions.
- The round robin (RR) scheduling algorithm is commonly used in time-sharing systems.
- It is the most appropriate scheduling algorithm for time-sharing operating systems.
- This algorithm shares many similarities with the FCFS scheduling algorithm, but with one additional feature.
- That feature is preemption, which forces a context switch between two processes.
- In this algorithm a small unit of time is defined, termed the time slice or time quantum.
- These time slices or quanta typically range from 10 ms to 100 ms.
- The ready queue in round robin scheduling is treated as a circular queue.

How to implement Round Robin CPU scheduling algorithm

Now let us look at the implementation of round robin scheduling:
  1. The ready queue is maintained as a FIFO (first in, first out) queue of processes.
  2. New processes are added at the rear of the ready queue, and the process to execute next is selected from the front.
  3. The CPU scheduler thus picks the process at the front of the ready queue. A timer is set to interrupt the processor when the time slice elapses.
  4. In some cases the CPU burst of a process may be less than the time slice. If so, the process releases the CPU voluntarily. The scheduler then moves to the next process in the ready queue and fetches it for execution.
  5. In other cases the remaining CPU burst of a process is longer than the time slice. Here the timer sends an interrupt to the processor, the process is preempted and put at the rear of the ready queue, and the scheduler moves on to the next process in the queue.
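The five steps above can be simulated directly. This sketch assumes all processes arrive at time 0 and uses a burst/quantum example in the style of standard textbooks; the process names are illustrative:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR scheduling and return each process's waiting time.
    `bursts` maps process name -> CPU burst; all arrive at time 0."""
    ready = deque(bursts)                  # FIFO ready queue of process names
    remaining = dict(bursts)
    finish = {}
    clock = 0
    while ready:
        proc = ready.popleft()             # scheduler picks the head of the queue
        run = min(quantum, remaining[proc])
        clock += run
        remaining[proc] -= run
        if remaining[proc] > 0:
            ready.append(proc)             # time slice expired: preempt to the tail
        else:
            finish[proc] = clock           # burst shorter than slice: CPU released early
    # waiting time = turnaround time - burst time (arrival at 0)
    return {p: finish[p] - bursts[p] for p in bursts}

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
# {'P1': 6, 'P2': 4, 'P3': 7}
```

With these bursts the average waiting time is (6 + 4 + 7) / 3 ≈ 5.67 time units, illustrating the "quite long" average waits noted below.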
The average waiting time under the round robin scheduling algorithm has been observed to be quite long.
- In this algorithm, no process is allocated more than one time slice in a row under any conditions.
- However, there is an exception: when there is only one process left to execute.
- If a process exceeds its time slice, it is preempted and put back at the tail of the queue.
- Thus, we can also call this a preemptive algorithm.
- The size of the time quantum greatly affects the performance of the round robin algorithm.
- If the time quantum is kept too large, the algorithm degenerates into FCFS.
- On the other hand, if the quantum is too small, the RR approach is called processor sharing.
- An illusion is created in which every process seems to have its own processor running at a fraction of the speed of the actual processor.
- Further, context switching affects the performance of the RR scheduling algorithm.
- A certain amount of time is spent in switching from one process to another.
In this time the registers and memory maps are loaded, a number of lists and tables are updated, the memory cache is flushed and reloaded, etc.
- The smaller the time quantum, the more often context switching occurs.


Wednesday, May 8, 2013

Explain the concept of Spooling and Buffering?


Concept of Spooling

- In the field of computer science, 'simultaneous peripheral operations on-line' has been shortened to the acronym 'spool'.
- SPOOL software such as IBM's 'SPOOL System' was used by computer systems in the period from the late 1950s to the early 1960s.
- Using this software, files could be copied from one medium to another. For example:
  1. From tape to punch card
  2. From punch card to tape
  3. From tape to printer
  4. From one card to another card
- IBM later released less expensive machines, such as the IBM 1401, which for some time brought down the use of dedicated spool software.
- Print spooling is the most common application of this concept.
- The documents to be printed are formatted and stored in an area of the disk, then retrieved when the print command is given.
- The printer prints out these documents at its own rate.
- Typically, a printer can print only one document at a time, and doing so takes a few seconds or minutes depending upon how fast it is.
Spooling speeds up this process, but how?
- With spool software, many processes can write documents to the print queue without having to wait.
- As soon as a process has written its document to the spooling device, it is free to carry out other tasks.
- Meanwhile, another process handles the printing of the documents.
If there were no spooling, a process would not be able to continue until the pending print job finished.
- This would lead to long waits during processing, making the paradigm inefficient.
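The hand-off just described can be sketched with a queue and a background worker. All names and timings here are illustrative; the point is that the "processes" return immediately after enqueueing, while a single daemon drains the queue at the printer's own pace:

```python
import queue
import threading
import time

spool = queue.Queue()              # the print queue on the spooling device
printed = []                       # record of what the "printer" has output

def printer_daemon():
    while True:
        doc = spool.get()
        if doc is None:            # sentinel: shut the spooler down
            break
        time.sleep(0.01)           # stand-in for the slow physical print
        printed.append(doc)
        spool.task_done()

worker = threading.Thread(target=printer_daemon)
worker.start()

# Multiple "processes" hand documents off without waiting for the printer.
for doc in ["report.txt", "invoice.txt", "memo.txt"]:
    spool.put(doc)                 # returns immediately; no waiting here

spool.join()                       # block only once, after all work is queued
spool.put(None)
worker.join()
print(printed)
```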

Concept of Buffering

- A buffer is a region of physical memory that temporarily stores data while it is being moved to another location.
- Typically, whenever data is taken from an input device such as a keyboard or a mouse, it is stored in a buffer before being sent to the processor or an output device.
- Buffers can be implemented either as a virtual data buffer or at a fixed memory location.
- In the majority of cases, buffers are implemented in software, which points to a location in physical memory and uses the faster RAM.
- Data access from buffers is quite fast compared to that from hard disk drives.
- Buffers are used wherever there is a difference between the rate at which data is received and the rate at which it is processed.
- They are also used when the two data rates are variable, as in online video streaming, printer spooling, and so on.
- The timing in a buffer is managed by implementing a FIFO algorithm, i.e. a queue in memory.
- This allows data to be written at one end and read from the other at the same time, with the two ends operating at different rates.
- Buffers are used with hardware I/O, for example in transmitting and receiving data over a network, in disk drives, or in playing a song through speakers.
- Buffers used in telecommunication are called telecommunication buffers and make use of a storage medium or a buffer routine.
- This routine compensates for the difference between the rates of receiving and sending data.
- Buffers are also used in interconnecting digital circuits that work at different rates, in making timing corrections, in delaying transmission time, and so on.
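The FIFO buffer between a producer and a consumer running at different rates can be sketched as follows. The bounded queue is the buffer: the producer writes at one end and blocks automatically when it gets too far ahead, while the consumer reads from the other end at its own pace:

```python
import queue
import threading

buf = queue.Queue(maxsize=4)      # bounded FIFO buffer between the two rates
received = []

def consumer():
    while True:
        item = buf.get()
        if item is None:          # sentinel: no more data
            break
        received.append(item)     # stand-in for processing at the slower rate

t = threading.Thread(target=consumer)
t.start()

for i in range(10):               # producer side, typically faster
    buf.put(i)                    # blocks transparently whenever the buffer is full
buf.put(None)
t.join()
print(received)                   # data arrives in order despite the rate mismatch
```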


Sunday, March 3, 2013

What is the need of Agile Process Improvement?


It is commonly seen that a number of change projects are designed and published, but none of them actually goes into implementation. Most of the time is wasted in writing and publishing them. This approach usually fails. We should stop working with this methodology and develop a new one. Mentioned below are some common scenarios in modern business:
  1. Developing a stronger project
  2. Changing the people working on it.
  3. Threatening that project with termination
  4. Appointment of a committee that would analyze the project
  5. Taking examples from other organizations to see how they manage to do it.
  6. Getting down to a dead project
  7. Tagging a dead project as still worthy of achieving something.
  8. Putting many different projects together so as to increase the benefit.
  9. Additional training
- Drops in the delivery of normal work always follow a change.
- Big change projects are either dropped or rejected.
- This happens because the changes introduced by such projects are mandatory to follow.
- This threatens the normal functioning of the organization.
- So the organization is eventually compelled to kill the whole process and revert to the old way of working.
- Instead of following this approach, a step-by-step process improvement can be followed, which is nothing but agile process improvement.
Now you should be convinced of why agile process improvement is actually needed.
The changes need to be adaptive; only then will the process stay balanced.
- An example is reaching a CMMI maturity level. It takes approximately 2 years to complete, and in that time the following arrive:
  1. Restructuring
  2. New competitors
  3. New products
- Only agile methods make it possible to adapt to these changes.
- The change cycles, when followed systematically, produce results every 2 to 6 weeks.
- Thus, your organization's workload and improvement stay perfectly balanced.
- Early identification of issues becomes possible for the organization, giving them a chance to be resolved early.
- By and by, the organization learns how to tackle problems and improve its work.
- In the end it is able to adapt to the ever-changing needs of the business.
- The responsibility for deployment and evaluation of the improvement is taken by the PPQA.
- The whole process is implemented in 4 sprints:
  1. Prototyping
  2. Piloting
  3. Deploying
  4. Evaluating
- Broad participation and leadership are required for these changes to take place.
- Other agile techniques along with Scrum can also be used in SPI.
- We can have the improvements continuously integrated into the way the organization works.
- The way of working, including assets and definitions, can also be refactored by integrating the improvements step by step.
- Pair work can be carried out on improvements.
- A collective ownership can be created across the organization.
- Evaluations and pilots can be used for testing purposes.
- In order to succeed with the sprints, it is important that only simple solutions be developed.
- An organization can write its coaching material with the help of the work description standards.
- This sprint technique helps the organization strike a balance between improvement and the normal workload.
- In agile process improvement, simple solutions are preferred over complex ones.
- Here, the status quo and the vision are developed using CMMI and SCAMPI.
- The status quo and vision are necessary for the beginning of software process improvement.
- SPI, when conducted properly, produces useful work; otherwise unnecessary documentation has to be produced.
- An improvement in the process is an improvement in the work.
- Improving work is what people prefer.


Thursday, November 1, 2012

What is Keyword driven testing? What are base requirements for keyword driven testing?


Table-driven testing, action word testing, or keyword-driven testing: whatever you may call it, it is one and the same thing, a software testing methodology that has been especially designed for automated testing.

However, the approach followed by this testing methodology is quite different and involves dividing the test creation process into two distinct phases, namely:
  1. Planning phase and the
  2. Implementation phase

What is Keyword Driven Testing?

- It has been specially designed for automated testing.
- That does not mean it cannot be employed for manual testing.
- It can be used for manual testing equally well.
- The biggest advantage provided by automated tests is reusability, which eases the maintenance of tests developed at a high level of abstraction.
- To put it simply, one or more atomic test steps together form a keyword.
- The first phase, the planning phase, involves the preparation of the testing tools and test resources.
- The second phase, the implementation phase, depends upon the framework or tool and thus differs accordingly.
- Often a framework with keywords such as 'enter' and 'check' is implemented by automation engineers.
- This makes it easy for test designers who do not have any knowledge of programming to design test cases based on such keywords, which have already been defined and implemented by the engineers in the planning phase.
- The test cases so designed are executed via a driver.
- The purpose of this driver is to read each keyword and execute the corresponding code.
- There are other testing methodologies which put everything straight into the implementation phase instead of performing the test design and engineering separately.
- In such a case, test automation is confined to the test design only.
- Some keywords, such as 'edit' and 'check', are created using tools that already have the necessary code written for them.
- This helps in cutting down the number of extra engineers needed in the process of test automation.
- It makes the implementation of the keyword a part of the tool.
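The driver idea can be sketched in a few lines. The keyword names, the test table, and the fake application object below are all invented for illustration; a real framework would bind keywords to actions against the application under test:

```python
class FakeApp:
    """Stand-in for the application under test (illustrative only)."""
    def __init__(self):
        self.fields = {}
    def enter(self, field, value):
        self.fields[field] = value
    def check(self, field, expected):
        assert self.fields.get(field) == expected, f"{field} mismatch"

def run_test(app, table):
    """Driver: read each row of the table, look up the keyword,
    and execute the corresponding code."""
    keywords = {"enter": app.enter, "check": app.check}
    for keyword, *args in table:
        keywords[keyword](*args)

# A test designer writes only this table; no programming knowledge needed.
test_case = [
    ("enter", "username", "alice"),
    ("enter", "password", "secret"),
    ("check", "username", "alice"),
]
app = FakeApp()
run_test(app, test_case)
print("test passed")
```

Because the table is plain data, the same keywords can be reused across many test cases, which is exactly the reusability advantage listed below.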

Advantages of Key word Driven Testing

There are some advantages of key word driven testing as stated below:
  1. Concise test cases.
  2. Test cases are readable even by non-technical stakeholders.
  3. Easily modifiable test cases.
  4. Easy reuse of the existing keywords by the new test cases.
  5. Keywords can be re used simultaneously across multiple test cases.
  6. Independent of programming languages as well as specific tool.
  7. Labor gets divided.
  8. Less requirement of tool and programming skills.
  9. Lower domain skills required for keyword implementation.
  10. An added layer of abstraction.

Disadvantages of Keyword Driven Testing

1.   A longer time to market when compared to manual testing.
2.   Initially, a high learning curve.

Base Requirements of Key word driven Testing

  1. Full separation of the test development and test automation processes: The separation of these two processes is very much required for test automation, since they have very different skill requirements. The fundamental idea behind this is that the testers need not be programmers. Testers should have the ability to define test cases that can be implemented without having to bother about the underlying technology.
  2. The scope of the test cases must be clear and differentiated: The test cases must not deviate from their scope.
  3. The right level of abstraction must be used for writing the tests: Tests may be written at a lower user-interface level, at a higher business level, etc.


Sunday, August 5, 2012

What is meant by Synchronization? How do you implement it in WinRunner?


The field of computer science has to deal with not one but two types of synchronization, namely:
  1. Synchronization of processes, called process synchronization, and
  2. Synchronization of data, called data synchronization.
These two kinds of synchronization are quite different from each other, but related in a way. 
- In the synchronization of processes, a certain number of processes have to come together and join up in order to commit a certain action or reach an agreement. 
whereas,
- In the synchronization of data, the idea followed is that of data integrity: the integrity of the data is maintained by keeping multiple copies of the data set in coherence with one another.

Now coming to their relation: these two distinct kinds of synchronization are related in that data synchronization can only be achieved by means of process synchronization primitives.
In process synchronization, mechanisms are applied under concurrency to ensure that two concurrent processes or threads do not execute specific portions of a software program (the critical sections) at the same instant of time. 
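A minimal sketch of such a primitive is a mutual-exclusion lock. Here two threads increment a shared counter; the lock guarantees that they never execute the critical section at the same instant, so no increment is lost:

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:                # only one thread inside at a time
            counter += 1          # the critical section

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                    # 200000, guaranteed by the lock
```

Without the lock, the two read-modify-write sequences could interleave and the final count could fall short of 200000.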

Process synchronization is quite often used to control access in the following:
  1. Small-scale multiprocessing systems
  2. Multiprocessor computers
  3. Multi-threaded environments
  4. Distributed computers made up of thousands of units
  5. Banking systems
  6. Web servers
  7. Database systems, and so on.
Now coming to data synchronization, there are several examples, such as the following:
1. Cluster file systems in a computing cluster
2. File synchronization
3. RAID
4. Cache coherency
5. Journaling
6. Database replication

How is Synchronization implemented in Win Runner?

- Synchronization also exists in WinRunner.
- A synchronization point in the test script instructs WinRunner to suspend and resume the running test until the software system or application under test is ready to proceed.
- With the help of these synchronization points, most of the anticipated timing problems can be solved.
- During analog testing, the role of a synchronization point is to make sure that the window is re-positioned at a specific location by the WinRunner software. 
- When a corresponding test is run, the mouse cursor is expected to move across the defined coordinates, and this re-positioning of the window helps the cursor make exact contact with the proper elements of the window.
- At the point of synchronization, the mismatch between the speed of script execution and the speed of the application is handled.
- Three types of synchronization points have been defined:
  1. Screen area bitmap
  2. Object/window bitmap
  3. Object/window property
- Another role of the synchronization point is to pause and resume the test as and when required, until the handling of a certain user event is complete.
- Whenever there is a difference between the application speed and the tool speed, synchronization is preferred so as to bring the application and the tool into step with each other.
- There are wait functions available for the same purpose as the synchronization points, but it is always better to go with synchronization points, since they maintain better uniformity between the test scripts and the application. 
- For example, synchronization can be carried out while:
  1. Retrieving information from a database.
  2. Waiting for a progress bar to reach 100%, and so on. 
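WinRunner's synchronization points belong to its own TSL scripting language, but the underlying idea is language-neutral: poll until the application is ready or a timeout expires. The sketch below illustrates that idea; the function names and the simulated progress bar are assumptions, not WinRunner API:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds pass,
    mirroring what a synchronization point does between script and app."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False                  # application never became ready

# Example: wait for a (simulated) progress bar to reach 100%.
progress = {"value": 0}
def tick():
    progress["value"] += 50       # the app advances a little on each poll
    return progress["value"] >= 100

assert wait_until(tick, timeout=2.0, interval=0.01)
print("application ready; test continues")
```

Unlike a fixed-length wait function, this returns as soon as the condition holds, which is why synchronization points keep script and application better in step.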


Sunday, July 8, 2012

What types of documents one need for QA, QC, and Software Testing?


For every process, some documents are vital, and in this article we discuss the types of documents required in the three processes mentioned below:
       1. QA or quality assurance
       2. QC or quality control and
       3. Software testing

First, let us see what these three processes are.

Quality Assurance


"Quality assurance is a process that involves the implementation of planned and systematic activities in a quality system so that all the quality requirements of the software system or application in question are met". 

It has also got the following attributions:
  1. Systematic measurement
  2. Comparison with a standard
  3. Monitoring of processes
  4. Associated feedback loop
  5. Error prevention
This whole process deals with two principles:
  1. Fit for purpose and
  2. Right first time

Quality Control Process


- It is a process involving a thorough review of the quality of all the factors that have a direct as well as indirect involvement in the production of the software or application. 
- This process is one of the best practices used for the inspection of software systems as well as applications. 
- The software products and artifacts are put through a visual examination.
- The developers or testers who are to examine the software system or application are provided with a list containing descriptions of unacceptable software defects.
- The products and artifacts are visually examined, and the defects dug out are reported to the management responsible for deciding on the software product release.
This process plays a great role in the stabilization of the software production process.

Software Testing


- Software testing is a self-explanatory term; it is an investigation seeking out defects and flaws in software systems and applications. 
- All the stakeholders get to know about the quality of the software system or application under test.
- The tools used here are normal testing techniques intended to dig out bugs and errors. 
- This process verifies that the software:
  1. Meets the requirements as stated in its documentation.
  2. Works in the desired way.
  3. Can be implemented again with the same characteristics.
  4. Satisfies the stakeholders.
Now let us list the documents required for the three processes discussed above:
  1. The first main document is the software requirements specification (SRS).
  2. Use cases document
  3. Solution document
  4. Software design documents
  5. Test plan document: this document should contain a detailed description of the following:
(a)  Scope of the functionality the test cases will test
(b)  Expected outcomes
(c)  Techniques used
  6. Test cases documentation, containing the procedures as well as the obtained results.
  7. Business requirements documents
  8. Functional specifications documents
  9. Project member details document, containing information about the team members including testers, the test lead, the PM, etc.
  10. Software testing schedule document.
  11. Traceability matrix: this document is used to check whether or not the test cases match the requirements stated in the SRS.
  12. Documents specific to a particular organization for quality control.
  13. Discovery documents (only for quality control): these state the business needs.
  14. Test reports
  15. Bug reports: these include all the missing, additional, or wrong deviations in the functionality or features of the software system or application.
  16. Release report: obtained at the end of testing.
  17. Test scenarios
  18. Test case templates
  19. Test case form
  20. Logs
  21. Weekly status reports
  22. Test scripts
  23. Resolution
  24. Test bed
Business requirements specification (BRS) and software requirements specification (SRS) are a must for the quality assurance and quality control processes. 


Wednesday, June 20, 2012

What are the advantages of smoke testing?


Smoke testing is one of the quick-and-dirty tests we have in the software testing field, by which the major functions of a software system or application can be exercised without bothering about the finer details of the code and implementation. 

When and how smoke testing is performed?


- Smoke testing is basically performed when a build of the software system or application is absolutely new.
- After the build is received, it is run, and here smoke testing comes into play: it involves checking whether the build is stable or not.
- A smoke test is not a single test but rather a series of tests carried out on the software system or application before its actual full-scale testing is commenced.
- It would not be wrong to say that smoke testing is a type of non-exhaustive testing. 
Furthermore, in some cases smoke testing also plays the part of the acceptance test that is run prior to the introduction of a new build into the main structure of the software system or application, before the regression or integration testing.

In this article we will be discussing the advantages of the smoke test, but before that let us see its characteristics:
  1. A smoke test is scripted, either as an automated test or as a manually written test.
  2. It is designed to touch all the parts of the application, albeit in a cursory way.
  3. It is shallow.
  4. It is wide.
  5. It ensures the working of the most crucial functions of a program.
  6. It ensures that the build is not broken.
  7. It verifies the readiness of a build to be tested.
  8. It is not a substitute for an actual functional test.

Advantages of Smoke Testing


Now let us chalk out the advantages of the smoke testing:

  1. Carrying out smoke testing at various stages reduces the problem of integration, and the risk of integration is minimized. Most teams fear the risk that, in a project where they must integrate code upon which they have been working individually, the combined code may not work well. It is only at this stage that incompatibilities in the software system are discovered. If the integration takes place without smoke testing, the debugging process can take a lot of time and may require re-implementation and re-design of the whole system. In many cases, projects have been cancelled due to errors found during integration. With daily smoke tests, integration errors can be reduced and runaway integration problems can be prevented.
  2. If the smoke test has been designed properly, it can detect errors and problems at an early stage.
  3. Since the smoke test detects the majority of problems at an early stage, much time and effort is saved.
  4. With smoke testing, the risk of low quality is reduced.
  5. With daily smoke tests, quality problems can be prevented from taking control of the project.
  6. The smoke test can uncover major problems and defects caused by wrong configuration.
Basically, what happens in smoke testing is that the whole system is exercised from end to end, and the errors and problems stressed are those that cause the functioning of the whole system to stop. The smoke test is not exhaustive, but it does expose the major problems of the software system or application under test. Smoke testing at all stages ensures the working of the major functionality and keeps a check on the stability of the build. 
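A smoke test's shallow-but-wide character can be sketched as a short list of crucial checks run against a build. The `fake_app` object and its functions below are assumptions standing in for a real application under test:

```python
def smoke_test(app):
    """Return True only if every crucial function is minimally working:
    shallow, wide checks run before full-scale testing begins."""
    checks = [
        ("starts up",       lambda: app["start"]() == "ok"),
        ("login works",     lambda: app["login"]("user", "pass") is True),
        ("main page loads", lambda: "home" in app["load_page"]("home")),
    ]
    for name, check in checks:
        try:
            if not check():
                print(f"SMOKE FAIL: {name}")
                return False                 # build is broken: stop here
        except Exception as exc:
            print(f"SMOKE FAIL: {name} ({exc})")
            return False
    return True                              # build is stable enough to test

# Simulated build that passes all checks:
fake_app = {
    "start": lambda: "ok",
    "login": lambda u, p: True,
    "load_page": lambda name: f"<html>home: {name}</html>",
}
print(smoke_test(fake_app))   # True
```

If any check fails, the build is rejected before the expensive full-scale testing begins, which is where the time and effort savings listed above come from.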

