


Tuesday, May 28, 2013

Concept of page fault in memory management

A page fault (also written as PF or #PF) can be thought of as a trap raised by the hardware for the software whenever a program tries to access a page that is mapped into its virtual address space but has not been loaded into main memory.

In most cases the operating system handles the page fault by bringing the required page into main (physical) memory, or sometimes by terminating the program if it has made an illegal attempt to access the page.

- The memory management unit (MMU), located in the processor, is the hardware responsible for detecting page faults.
- The exception-handling software that helps the MMU deal with page faults is part of the operating system.
- A page fault is not always an error.
- Page faults often play a necessary role in increasing the memory that can be made available to applications that execute using the operating system's virtual memory.
- In the latest versions of its Resource Monitor, Microsoft uses the term 'hard fault' instead of 'page fault'.

Classification of Page Faults

Page faults can be classified into three categories (a rough sketch of a handler that distinguishes all three follows the list below):

1. Minor: 
- This type of fault is also called a soft page fault. It occurs when the page is already in memory at the time of the fault, but the memory management unit has not yet marked it as loaded in physical memory.
- The operating system includes a page fault handler whose duty is to make an entry for the page pointed to by the memory management unit.
- After making the entry, its task is to indicate that the page has been loaded.
- However, the page does not necessarily have to be read in from disk.
This is possible when programs share memory and the page has already been loaded into memory for another application.
- In operating systems that apply secondary page caching, the page may have been removed from the working set of the process but not yet deleted or written to disk.

2. Major: 
- A major fault is the mechanism many operating systems use to increase the memory available to a program on demand.
- The operating system delays loading parts of the program from disk until the program attempts to use them and thereby generates a page fault.
- In this case the page fault handler has to find a free page in memory, or evict a non-free one.
- Once a page is available, the operating system can read the data into it from disk and make an entry for the required page.

3. Invalid: 
- This type of fault occurs whenever a reference is made to an address that does not exist in the virtual address space and therefore has no corresponding page in memory.
- The page fault handler then has to terminate the code that made the reference and report the invalid access.
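
To make the three categories concrete, here is a rough, hypothetical Python sketch of a fault handler; the DISK, PHYSICAL_MEMORY, and PAGE_TABLE structures and the access function are invented for illustration and are not how a real kernel is implemented.

# Toy simulation of a page fault handler distinguishing minor, major and
# invalid faults. All structures here are illustrative, not a real kernel API.

DISK = {0: "code page", 1: "data page", 2: "stack page"}   # pages known to the process
PHYSICAL_MEMORY = {}        # frame contents, keyed by page number
PAGE_TABLE = {}             # page number -> {"present": bool}

def access(page):
    """Simulate a memory access; handle a fault if the page is not present."""
    entry = PAGE_TABLE.get(page)
    if entry and entry["present"]:
        return PHYSICAL_MEMORY[page]          # no fault: page mapped and present

    # --- page fault: the "MMU" traps to this handler ---
    if page not in DISK:
        raise MemoryError(f"invalid fault: page {page} is not in the address space")

    if page in PHYSICAL_MEMORY:
        # Minor (soft) fault: data already in memory, only the mapping is missing.
        PAGE_TABLE[page] = {"present": True}
        print(f"minor fault on page {page}: remapped without disk I/O")
    else:
        # Major (hard) fault: read the page in from disk, then map it.
        PHYSICAL_MEMORY[page] = DISK[page]
        PAGE_TABLE[page] = {"present": True}
        print(f"major fault on page {page}: loaded from disk")
    return PHYSICAL_MEMORY[page]

access(0)                    # major fault: first touch, loaded from disk
PAGE_TABLE[0]["present"] = False
access(0)                    # minor fault: still in memory, just remapped
try:
    access(9)                # invalid fault: no such page
except MemoryError as err:
    print(err)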


Sunday, June 3, 2012

What is meant by project velocity?


Project velocity is one of the terms you come across while discussing iteration planning and release planning. It plays an important part in both of these planning processes, yet most of us are not aware of its importance.
This article centres on project velocity and discusses it in detail. Like velocity in physics, project velocity gives the speed at which a software project is being developed.

In other words, project velocity measures the amount of work and effort being put into the software project.

About Project Velocity


- The project velocity is simply the sum of the estimates of the user stories completed in an iteration.
- For release planning you add up the estimates of the user stories; for iteration planning you add up the estimates of the programming tasks.
- Either measure can be employed to determine the project velocity for iteration planning (a small sketch follows this list).
- In the iteration planning meeting, the customer chooses the same number of user stories as in the previous iteration.
- There is a rule that the project velocity taken on in an iteration must not exceed that of the preceding iteration.
- The programming tasks are simply the user stories broken down into smaller pieces.
- The development team signs up for only as many tasks as were completed in the previous iteration.
- This arrangement gives the developers room to recover and clean up when they get stuck in a sticky situation, and it lets the estimates average out.
- Project velocity is expected to rise when developers who have already finished their work are allowed to ask the customer for further user stories, and when clean-up tasks have also been accomplished.
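
As a rough illustration of these rules, the following Python sketch sums story estimates to get the velocity and then plans the next iteration against it; the story names, estimates, and helper functions are invented for the example.

# Toy velocity tracker: velocity = sum of estimates of the user stories
# finished in an iteration, and the next iteration signs up for the same amount.

def velocity(completed_stories):
    """Sum the estimates (e.g. in ideal days) of the stories finished in one iteration."""
    return sum(estimate for _, estimate in completed_stories)

def plan_next_iteration(backlog, last_velocity):
    """Greedily pick stories from the backlog until the previous velocity is used up."""
    chosen, budget = [], last_velocity
    for story, estimate in backlog:
        if estimate <= budget:
            chosen.append((story, estimate))
            budget -= estimate
    return chosen

iteration_1 = [("login screen", 3), ("password reset", 2), ("audit log", 5)]
v = velocity(iteration_1)                       # 10
backlog = [("report export", 4), ("search", 6), ("profile page", 3)]
print("velocity:", v)
print("next iteration:", plan_next_iteration(backlog, v))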

Do not expect the project velocity to stay consistent throughout the development cycle; it is expected to go through some ups and downs.
- However, a dramatic change in the project velocity is a cause for concern.
There is no need to worry, though, since this can be kept in check by re-estimating and re-negotiating the release plan.
- This is not the only situation in which the project velocity may change.
- When the system goes into production and maintenance tasks begin, the project velocity is again subject to change.
- Dividing the project velocity by the length of the iteration or by the number of people involved does not give a useful figure.
- In particular, it is not an appropriate way to compare the productivity of two projects.
- This is because every team has its own criteria for estimating user stories, so some estimates run high and some run low.
- What matters is keeping track of the amount of work being done on the project so that a steady, easily predicted project velocity can be maintained.

The problem comes when making the first estimate.
- At least for the following iterations you will have a clue about what project velocity to expect.
- If this measure is used properly, you may be able to detect a major fault in your project much earlier than you would have with traditional development methods.


Monday, May 14, 2012

How to define the boundaries between an automation framework and a testing tool?


Automated software testing is difficult to carry out on its own and therefore requires support. This support is supplied by a framework commonly known as a "test automation framework". To define it formally, we can say that it is a set of the aspects mentioned below:
  1. Assumptions whether true or false,
  2. Concepts providing support to the automated testing and,
  3. The tools that provide aid in performing the automated testing and so on.
About Automated Testing Framework and a Testing Tool
- The most beneficial advantage is the reduction of the high costs of the whole software testing life cycle (STLC).
- The software testing life cycle is a very extensive process and requires a lot of effort.
- The test automation framework alone cannot complete the software testing life cycle.
- There are additional tools, beyond the test automation framework, that aid the testing process.
- Most of the tools available nowadays are quite easy to use and deliver what they promise.
- Most testing tools have been designed to take control of the whole testing process, including the quality assurance check.
- Testing tools that create test cases do so based on the requirements.
- Some testing tools have been designed to carry out user acceptance testing and are also capable of tracking the testing environments.
- With so many testing methodologies around, an equivalent variety of testing tools has been developed to carry out the corresponding tests.
- In other words, for every kind of testing there are particular testing tools.

Software testing tools and the test automation framework are often confused as being the same thing, but they are not: there is a considerable difference between the two. This article focuses on the boundary between the automation framework and the testing tools.

How is the work reduced by a test automation framework?
- The start-up scripts and the driver scripts remain the same across all test runs.
- Only the test case file whose test case has changed needs to be updated whenever a change in a test case is made.
- The driver scripts and start-up scripts are not changed, since under ideal conditions there is no need to update them whenever changes are introduced into the software program or application (a minimal sketch of this separation follows below).
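
A minimal sketch of that separation, assuming the test cases live in plain data while the driver stays fixed; the TEST_CASES data, the add function under test, and run_all are all hypothetical.

# Toy data-driven driver: the driver script below never changes; only the
# test case data (here an in-line list standing in for a test case file)
# is updated when a test case changes.

# Stand-in for a test case file: (test name, input a, input b, expected sum).
TEST_CASES = [
    ("adds small numbers", 2, 3, 5),
    ("adds negatives",    -1, -4, -5),
]

def add(a, b):                      # the unit under test (hypothetical)
    return a + b

def run_all(cases):
    """Driver script: execute every case and report pass/fail."""
    failures = 0
    for name, a, b, expected in cases:
        actual = add(a, b)
        status = "PASS" if actual == expected else "FAIL"
        failures += status == "FAIL"
        print(f"{status}: {name} (expected {expected}, got {actual})")
    print(f"{len(cases) - failures}/{len(cases)} passed")

run_all(TEST_CASES)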

Now, coming to the testing tools, how do they work?
Testing tools build upon the requirements of the particular piece of software under test and can make significant improvements in the following aspects of the development process:
  1. Productivity of the developers and programmers,
  2. Motivation of the development team,
  3. Quality of the software product and so on.
Purposes for which the test automation framework is used:
  1. For defining a format for the expectations to be expressed in.
  2. For creating a mechanism for driving the software system or application.
  3. For execution of the test cases.
  4. For reporting of the results.
Purposes for which testing tools are used:
  1. Monitoring the program.
  2. Simulating the instruction set.
  3. Repeating the system level tests.
  4. Making benchmarks or run-time performance comparisons.
  5. Executing the program step by step.
  6. Symbolic debugging for the inspection of program variables.
  7. Fault detection.
The test automation framework itself can be thought of as a testing tool.


Tuesday, December 27, 2011

What are different characteristics of Scalability Testing?

Scalability can be defined as the ability of a software application, network, process, or program to handle an increasing workload gracefully and still carry out its assigned tasks effectively. Throughput is the classic example of this ability in a software application.

- Scalability as such is difficult to define without practical examples.
- Therefore, scalability is defined along several dimensions.
- Scalability is much needed in communication areas such as networks, in software applications, and in handling huge databases; it is also a very important aspect of routers and networking.
- Software applications and systems having the property of scalability are called scalable software systems or applications.
- Their throughput improves to a surprising extent when new hardware devices are added; such systems are commonly known as scalable systems.
- Similarly, if a design, network, protocol, program, or algorithm is efficient enough to work well when applied to larger problems, whether the input data is very large or the problem spans many nodes, it is said to be efficiently scalable.

If the program fails as the quantity of input data increases, it is not said to scale. Scalability is greatly needed in the field of information technology. It can be measured along several dimensions, and scalability testing deals with testing exactly these dimensions.

The kinds of scalability testing have been discussed in detail below:

- Functional scalability testing:
This testing exercises the new functionality added to the software application or program to enhance and improve its overall working.

- Geographic scalability testing:
This tests the ability of the software system or application to maintain its performance, throughput, and usefulness regardless of how its working nodes are distributed geographically.

- Administrative scalability testing:
This deals with increasing the number of working nodes in the software so that a single difficult task is divided among smaller units, making it much easier to accomplish.

- Load scalability testing:
This tests the ability of a distributed program to expand and contract so that it can take on light and heavy workloads accordingly (a small sketch follows this list).
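
As a rough sketch of a load scalability check (not a real performance tool), the code below runs an invented handle_request workload at increasing volumes and asserts that throughput does not degrade beyond an arbitrary threshold.

# Toy load scalability check: run the same operation at increasing workloads
# and verify that throughput does not degrade by more than an allowed factor.
import time

def handle_request(n):                       # hypothetical unit of work
    return sum(i * i for i in range(n))

def throughput(workload, size=2_000):
    """Operations completed per second for a given number of requests."""
    start = time.perf_counter()
    for _ in range(workload):
        handle_request(size)
    return workload / (time.perf_counter() - start)

baseline = throughput(100)
for load in (500, 1_000, 2_000):
    t = throughput(load)
    degradation = baseline / t
    print(f"load {load}: {t:,.0f} ops/s (x{degradation:.2f} vs baseline)")
    assert degradation < 1.5, f"throughput degraded too much at load {load}"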

There are several examples of scalability today. A few are listed below:

- The routing table of a routing protocol, which grows as the network grows.
- A DBMS (database management system) is scalable in the sense that more and more data can be loaded into it by adding the required devices.
- An online transaction processing system is scalable because it can be upgraded so that more transactions can be handled at one time.
- The Domain Name System is a distributed system that works effectively even at the scale of the World Wide Web; it is scalable.

Scaling is done in basically two ways, discussed below:

- Scaling out, or scaling horizontally: This method involves adding nodes or workstations to an already distributed software application. It has led to the development of technologies such as batch processing management and remote maintenance that were not available before.

- Scaling up, or scaling vertically: This can be defined as the addition of hardware or software resources to a single node of the system. These resources may be CPUs or memory devices. This method of scaling has led to tremendous improvements in virtualization technology.


Monday, December 5, 2011

What are different characteristics of stability testing?

Stability testing, in the context of software testing and engineering, refers (as the name itself indicates) to attempts to determine whether an application will crash.

- Stability testing seeks to find a fault, error, bug, or any other cause that can render the software system or application non-working or cripple it.
- The main objective of stability testing is to determine whether there are grounds on which the software system or application should be denied certification, and also to find positive grounds on which it can be granted certification.

- For a software system or application to be certified, it should be in a functional state and basically stable.
- This can only be established by applying specific, suitable criteria and tests for functionality and stability; this is exactly what stability testing does.

Several criteria are available for stability testing. A few are discussed below in detail:

1. Pass or fail criteria
- Each primary function is tested and the results are recorded.
- Each individual function is operated or executed in a way that is apparently consistent with its objective or purpose, regardless of how correct its result or output is.
- The observations are recorded for analysis.
- The test fails if at least one primary function proves incapable of operating in a way that is apparently consistent with its aim or purpose.

2. Functional ability of a software system or the application
- Every software system has some impairment.
- But that does not necessarily mean the software system is unfit for normal use.
- The failure criterion here is that the software product or application works abnormally in a way that seriously impairs it for normal usage.

3. Disruption criteria
- The software system or the application is observed to disrupt the normal functioning of the operating system.

4. Criteria of inoperability
- Pass: no primary function of the software system or application is observed to become obstructed, inoperable, or non-functional during the course of the testing.
- Fail: at least one primary function of the software system or application is observed to become obstructed, inoperable, or non-functional during the course of the testing.

5. The software system or application is observed to crash, fail, hang, or lose data.

Some Important Points:
- Stability can be defined as the ability of a software system or application to continue functioning, over time and over its full range of functionality, without crashing, failing, hanging, or losing data.

- For a tester to know whether the software system is seriously unfit for normal and regular use, he or she needs to know how that software system or application works in a normal environment, i.e., with a normal user and normal usage.

- In order to carry out the stability test, the tester needs to know the types of data values that the software system or application can effectively and efficiently process.

- To test for the instability of a software system, the tester uses this knowledge to give the system some challenging inputs in order to make it fail (a small harness sketch follows below).
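
A minimal sketch of such a harness, assuming any raised exception counts as a failure; the save_record and load_record functions, the defect they simulate, and the two-second duration are invented for the example.

# Toy stability harness: exercise the primary functions over and over for a
# fixed period and record whether any call crashes (raises) or misbehaves.
import random, time

def save_record(data):              # hypothetical primary function
    if not data:
        raise ValueError("lost data")   # simulated defect under unusual input
    return dict(data)

def load_record(record):            # hypothetical primary function
    return record.get("name", "")

def stability_run(seconds=2):
    failures, iterations = 0, 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        iterations += 1
        data = {"name": "x"} if random.random() > 0.01 else {}   # occasional challenging input
        try:
            load_record(save_record(data))
        except Exception as err:
            failures += 1
            print(f"iteration {iterations}: failure ({err})")
    print(f"{iterations} iterations, {failures} failures")
    return failures == 0

print("stable" if stability_run() else "unstable")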


Wednesday, October 6, 2010

How to choose a black box or a white box test?

White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus, black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to completely test a software product both black and white box testing are required.
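
As a hedged illustration of that difference, the sketch below tests a hypothetical clamp function in both styles: the black box cases come only from its specification, while the white box cases are chosen to force every branch of the implementation to execute.

# A hypothetical function with a specification: clamp(x, low, high) returns x
# limited to the range [low, high].
def clamp(x, low, high):
    if x < low:
        return low
    if x > high:
        return high
    return x

# Black box tests: derived purely from the specification, without reading the code.
assert clamp(5, 0, 10) == 5        # value inside the range is unchanged
assert clamp(-3, 0, 10) == 0       # value below the range is raised to low
assert clamp(99, 0, 10) == 10      # value above the range is lowered to high

# White box tests: derived from the implementation, forcing each branch to execute
# (x < low, x > high, and the fall-through return) so no path goes untested.
for value, expected in ((-1, 0), (11, 10), (4, 4)):
    assert clamp(value, 0, 10) == expected

print("all black box and white box cases passed")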

White box testing is much more expensive in terms of resources and time as compared to black box testing. It requires the source code to be produced before the tests can be planned, and it is much more laborious in determining suitable input data and in determining whether the software's output is correct or incorrect. It is advisable to start test planning with a black box approach as soon as the specification is available. White box tests should be planned as soon as the low level design (LLD) is complete, since the low level design addresses all the algorithms and coding style. The paths should then be checked against the black box test plan and any additional required test cases should be determined and applied.

The consequences of a test failure at the requirements stage are very expensive. A failure of a test case may result in a change that requires all black box testing to be repeated and the white box paths to be re-determined. The cheaper option is to regard the process of testing as one of quality assurance rather than quality control: the intention is that sufficient quality is put into all previous design and production stages so that testing merely confirms it, rather than testing being relied upon to discover any faults in the software, as in the case of quality control.


Tuesday, August 31, 2010

Features of Software Reliability Testing and Reliability Techniques

Computer systems are an important part of our society. Reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly. Software Reliability is the probability of failure-free software operation for a specified period of time in a specified environment. Software Reliability is also an important factor affecting system reliability.
A completely different approach is “reliability testing”, where the software is subjected to the same statistical distribution of inputs that is expected in operation.
Reliability testing will tend to uncover earlier those failures that are most likely in actual operation, thus directing efforts at fixing the most important faults.
For the fault-finding effectiveness of reliability testing to deliver on its promise of better use of resources, the testing profile must be truly representative of operational use.
Reliability testing is attractive because it offers a basis for reliability assessment.
Reliability testing may be performed at several levels. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels.
A key aspect of reliability testing is to define "failure".
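
A minimal sketch of this idea, assuming the operational profile is just a weighted distribution over input classes and that a raised exception counts as a failure; the profile, the input classes, and the parse function are invented for the example.

# Toy reliability test: draw inputs according to an operational profile
# (the statistical distribution expected in real use) and estimate the
# probability of failure-free operation from the observed failures.
import random

OPERATIONAL_PROFILE = {          # input class -> probability of occurring in the field
    "short_text": 0.7,
    "long_text": 0.25,
    "empty": 0.05,
}

def parse(text):                 # hypothetical unit under test
    return text.split()[0]       # defect: fails on empty input

def make_input(kind):
    return {"short_text": "hello world", "long_text": "lorem " * 200, "empty": ""}[kind]

def reliability_estimate(runs=10_000):
    kinds, weights = zip(*OPERATIONAL_PROFILE.items())
    failures = 0
    for _ in range(runs):
        kind = random.choices(kinds, weights=weights)[0]
        try:
            parse(make_input(kind))
        except Exception:
            failures += 1
    return 1 - failures / runs

print(f"estimated probability of failure-free operation: {reliability_estimate():.3f}")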

Software Reliability Techniques
- Trending reliability tracks the failure data produced by the software system to develop a reliability operational profile of the system over a specified time.
- Predictive reliability assigns probabilities to the operational profile of a software system.

