Tuesday, May 28, 2013
Concept of page fault in memory management
Posted by Sunflower at 5/28/2013 03:00:00 PM
Labels: Address, Fault, Hardware, Invalid, Main Memory, Major, Memory, Memory management, Minor, Operating System, Page Fault, Pages, Physical, Processor, Program, Software, Types, Virtual Memory
Sunday, June 3, 2012
What is meant by project velocity?
About Project Velocity
Posted by Sunflower at 6/03/2012 05:00:00 AM
Labels: Customers, Developers, Development, Effort, Estimation, Fault, Iteration, Iterative Planning, Length, Programming, Project Velocity, Release Planning, Software Project, Speed, Tasks, User stories, Users, Work
Monday, May 14, 2012
How to define the boundaries between an automation framework and a testing tool?
A test automation framework is made up of:
- Assumptions, whether true or false,
- Concepts that provide support to automated testing, and
- The tools that aid in performing the automated testing, and so on.
The choice of framework and tools affects:
- The productivity of the developers and programmers,
- The motivation of the development team,
- The quality of the software product, and so on.
The framework itself is responsible for:
- Defining a format in which expectations are expressed,
- Creating a mechanism for driving the software system or application,
- Executing the test cases, and
- Reporting the results (a minimal sketch of these four responsibilities follows the lists below).
A testing tool, by contrast, aids in activities such as:
- Monitoring the program,
- Simulating the instruction set,
- Repeating the system-level tests,
- Making benchmarks or run-time performance comparisons,
- Executing the program step by step,
- Symbolic debugging for the inspection of program variables, and
- Fault detection.
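As a purely illustrative sketch of those four framework responsibilities (every name here, such as Expectation, drive_system, run_suite and report, is invented for the example and does not come from any particular framework), the division of duties might look like this:

```python
# A minimal, illustrative sketch of the four framework responsibilities
# listed above. All names here are hypothetical; a real framework such as
# pytest or JUnit covers the same duties far more completely.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Expectation:
    """A format for expressing expectations: a description plus a check."""
    description: str
    check: Callable[[], bool]


def drive_system(command: str) -> str:
    """Mechanism for driving the system under test.

    Here it is only a stub; in practice this might call an API, a CLI,
    or a UI-automation library.
    """
    return command.upper()  # stand-in for real behaviour


def run_suite(expectations: List[Expectation]) -> List[Tuple[str, bool]]:
    """Execute the test cases and collect (description, passed) pairs."""
    return [(e.description, e.check()) for e in expectations]


def report(results: List[Tuple[str, bool]]) -> None:
    """Report the results in a human-readable form."""
    for description, passed in results:
        print(f"{'PASS' if passed else 'FAIL'}: {description}")


if __name__ == "__main__":
    suite = [
        Expectation("input is upper-cased",
                    lambda: drive_system("ping") == "PING"),
        Expectation("empty input stays empty",
                    lambda: drive_system("") == ""),
    ]
    report(run_suite(suite))
```

Everything outside these four duties in the sketch (stepping through the program, benchmarking, symbolic debugging) would be the territory of a testing tool rather than the framework.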
Posted by Sunflower at 5/14/2012 11:55:00 PM
Labels: Application, Automation, Boundaries, Focus, Cost, Debug, Developers, Fault, Framework, Programmers, Purpose, Quality, Requirements, Results, Test automation framework, Test cases, Testers, Testing tools, Tests, Tools
Tuesday, December 27, 2011
What are different characteristics of Scalability Testing?
Scalability can be defined as the ability of a software application, network, process or program to handle an increasing workload gracefully and to keep carrying out its assigned tasks effectively. Throughput is the most common measure of this ability in a software application.
- Scalability as such is very difficult to define without practical examples.
- Therefore, scalability is usually defined along particular dimensions.
- Scalability is needed in communication (networks and routers), in software applications, and in handling huge databases.
- Software applications and systems having the property of scalability are called scalable software systems or applications.
- Such systems improve their throughput to a surprising extent when new hardware devices are added.
- Similarly, a design, network, system protocol, program or algorithm is said to scale well if it remains suitable and efficient when applied to larger problems, whether the input data grows large or the number of nodes grows.
If the program fails as the quantity of input data increases, it is not said to scale. Scalability is greatly needed in the field of information technology. It can be measured along several dimensions, and scalability testing deals with testing exactly these dimensions.
The kinds of scalability testing have been discussed in detail below:
- Functional scalability testing:
In this testing, new functionality that has been added to the software application or program to enhance and improve its overall working is tested.
- Geographic scalability testing:
This testing checks the ability of the software system or application to maintain its performance, throughput and usefulness regardless of how its working nodes are distributed geographically.
- Administrative scalability testing:
This testing deals with increasing the number of working nodes in the software, so that a single difficult task is divided among smaller units and becomes much easier to accomplish.
- Load scalability testing:
This testing can be defined as testing the ability of a distributed program to divide itself further and recombine, so that it can take on light and heavy workloads accordingly (a minimal sketch of such a test follows this list).
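As a purely illustrative load-scalability sketch (the operation, the request counts and the worker counts below are all invented), one can drive a stand-in operation at increasing levels of concurrency and watch how throughput responds:

```python
# A minimal load-scalability sketch: drive a stand-in operation at
# increasing levels of concurrency and record throughput at each level.
# The operation and the numbers here are purely illustrative.

import time
from concurrent.futures import ThreadPoolExecutor


def operation() -> None:
    """Stand-in for one request to the system under test."""
    time.sleep(0.005)  # simulate ~5 ms of I/O-bound work


def measure_throughput(workers: int, requests: int = 200) -> float:
    """Run `requests` operations with `workers` threads; return ops/second."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda _: operation(), range(requests)))
    return requests / (time.perf_counter() - start)


if __name__ == "__main__":
    for workers in (1, 2, 4, 8, 16):
        print(f"{workers:>2} workers -> {measure_throughput(workers):8.1f} ops/s")
    # In a real load scalability test, the throughput curve would be compared
    # against an agreed target, e.g. "throughput should grow roughly linearly
    # up to N workers and never drop below the single-worker baseline."
```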
There are several examples of scalability available today. A few are listed below:
- The routing table of a routing protocol, which grows as the network grows.
- A DBMS (database management system) is scalable in the sense that more and more data can be loaded into it by adding the required new devices.
- An online transaction processing system can also be called scalable, since it can be upgraded so that more transactions can be processed at one time.
- The domain name system is a distributed system and works effectively even at the scale of the World Wide Web; it is scalable.
Scaling is done in basically two ways, discussed below:
- Scaling out or scaling horizontally: This method involves adding nodes or workstations to an already distributed software application. It has encouraged technologies such as batch processing management and remote maintenance that were not practical before this approach became common.
- Scaling up or scaling vertically: This can be defined as the addition of hardware or software resources, such as CPUs or memory, to a single node of the system. This method of scaling has led to tremendous improvements in virtualization technology.
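As a back-of-the-envelope sketch of the two approaches just described (all numbers are invented, and the 10% coordination overhead is only an assumption made for illustration), their modelled capacities can be compared like this:

```python
# Invented numbers, for illustration only: compare the modelled capacity of
# scaling up (one faster node) with scaling out (more nodes, each paying a
# small coordination overhead).

BASE_CAPACITY = 1_000          # requests/second of one ordinary node
COORDINATION_OVERHEAD = 0.10   # assumed fraction of each node lost to coordination


def scale_up(speedup_factor: float) -> float:
    """Capacity of a single node whose hardware is upgraded by `speedup_factor`."""
    return BASE_CAPACITY * speedup_factor


def scale_out(nodes: int) -> float:
    """Capacity of `nodes` ordinary nodes, minus the coordination overhead."""
    return BASE_CAPACITY * nodes * (1 - COORDINATION_OVERHEAD)


if __name__ == "__main__":
    print(f"scale up 4x  : {scale_up(4.0):8.0f} req/s")
    print(f"scale out x4 : {scale_out(4):8.0f} req/s")
    print(f"scale out x16: {scale_out(16):8.0f} req/s")  # hardware upgrades rarely reach 16x
```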
Posted by Sunflower at 12/27/2011 07:15:00 PM
Labels: Algorithm, Application, Communication, Databases, Design, Effective, Fault, Hardware, Network, Quality, Requirements, Resources, Scalability, Scalability testing, Software Systems, Tasks, Throughput
Monday, December 5, 2011
What are different characteristics of stability testing?
Stability testing, in the context of software testing and engineering, refers (as the name itself indicates) to attempts to determine whether an application will crash.
- Stability testing seeks to find a fault, an error, a bug or any other reason that can render the software system or application non-working or cripple it.
- The main objective of stability testing is to determine whether there are grounds on which the software system or application should be denied certification, and equally to find positive grounds on which certification can be granted.
- For a software system or application to be certified, it should be in a functional state and basically stable.
- This can only be established by applying specific, suitable criteria and tests for functionality and stability. That is what stability testing is.
There are several criteria available for stability testing. A few are discussed below in detail:
1. Pass or fail criteria
- Each primary function is tested and the results are recorded.
- Pass: every individual function operates or executes in a way that is apparently consistent with its objective or purpose, regardless of the degree of correctness of its result or output; the observations are recorded for analysis.
- Fail: at least one primary function proves incapable of operating in a way that is apparently consistent with its aim or purpose.
2. Functional ability of the software system or application
- Some impairment is bound to exist in any software system.
- That does not necessarily mean the software system is unfit for normal use.
- The system fails this criterion when it works abnormally in a way that seriously impairs it for normal use.
3. Disruption criteria
- Fail: the software system or application is observed to disrupt the normal functioning of the operating system.
4. Criteria of inoperability
- Pass: no primary function of the software system or application is observed to become obstructed, inoperable or non-functional during the course of the testing.
- Fail: at least one primary function of the software system or application is observed to become obstructed, inoperable or non-functional during the course of the testing.
5. The software system or application is observed to crash, fail, hang or lose data.
Some Important Points:
- Stability can be defined as the ability of a software system or application to keep functioning, over time and over its full range of functionality, without crashing, failing, hanging or losing data.
- For a tester to know whether the software system is seriously unfit for normal, regular use, he or she needs to know how that software system or application works in a normal environment, i.e. with a normal user and normal usage.
- To carry out a stability test, the tester needs to know the types of data values the software system or application can process effectively and efficiently.
- To test for instability, the tester uses this knowledge to feed the system challenging inputs in an attempt to make it fail (a minimal soak-test sketch follows this list).
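As a purely illustrative sketch of that idea (the primary functions, the two-second time budget and every name here are invented stand-ins), a stability or soak test can exercise the primary functions repeatedly over a fixed period and record every failure it observes:

```python
# A minimal soak-test sketch along the lines described above: exercise the
# primary functions repeatedly over a fixed time budget and record every
# failure (here, every unhandled exception). The functions and the time
# budget are purely illustrative stand-ins.

import random
import time


def primary_functions():
    """Stand-ins for the primary functions of the system under test."""
    return {
        "parse": lambda: int(random.choice(["1", "42", "7"])),
        "divide": lambda: 100 / random.choice([1, 2, 5]),
    }


def soak_test(duration_seconds: float = 2.0):
    """Call each primary function in a loop; collect failures over time."""
    failures = []
    calls = 0
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        for name, fn in primary_functions().items():
            calls += 1
            try:
                fn()
            except Exception as exc:  # an unhandled exception counts as a stability defect here
                failures.append((name, repr(exc)))
    return calls, failures


if __name__ == "__main__":
    calls, failures = soak_test()
    print(f"{calls} calls, {len(failures)} failures")
    for name, error in failures:
        print(f"  {name}: {error}")
```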
Posted by Sunflower at 12/05/2011 05:56:00 PM
Labels: Application, Bugs, Criteria, Disrupt, Errors, Fault, Functional, Functionality, Importance, Objectives, Quality, Software Systems, Stability, Stability testing, State, Tests
Wednesday, October 6, 2010
How to choose a black box or a white box test?
White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus, black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to completely test a software product both black and white box testing are required.
White box testing is much more expensive in terms of resources and time than black box testing. It requires the source code to be produced before the tests can be planned, and it is much more laborious both in determining suitable input data and in determining whether the software behaves correctly. It is advisable to start test planning with a black box testing approach as soon as the specification is available. White box tests should be planned as soon as the low level design (LLD) is complete, since the low level design addresses all the algorithms and coding style. The paths should then be checked against the black box test plan, and any additional required test cases should be determined and applied.
The consequences of a test failure at the requirements stage are very expensive. A failure of a test case may result in a change, which requires all black box testing to be repeated and the white box paths to be re-determined. The cheaper option is to regard testing as a matter of quality assurance rather than quality control: the intention is that sufficient quality is built into all previous design and production stages, so that testing need not be relied upon to discover faults in the software, as it would be under quality control.
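As a purely illustrative sketch of the distinction (the absolute function, its one-line specification and the test names are invented for this example), a black box test is written only from the specification, while a white box test is written from the implementation so that every branch is exercised:

```python
# Illustrative only: a tiny routine with two branches. The black box tests
# are derived from the specification ("return the absolute value of x");
# the white box tests are derived from the implementation and force each
# branch to execute at least once.

def absolute(x: int) -> int:
    if x < 0:
        return -x
    return x


# Black box: testing against the specification, with no knowledge of branches.
def test_black_box():
    assert absolute(5) == 5
    assert absolute(-5) == 5
    assert absolute(0) == 0


# White box: testing against the implementation, choosing inputs so that
# both the x < 0 and the x >= 0 branches are executed.
def test_white_box_branches():
    assert absolute(-1) == 1   # exercises the x < 0 branch
    assert absolute(2) == 2    # exercises the x >= 0 branch


if __name__ == "__main__":
    test_black_box()
    test_white_box_branches()
    print("all tests passed")
```

A black box suite like the first function can never show that the second branch was reached, and the white box suite can never show that the specification itself is complete, which is why both are needed.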
Posted by Sunflower at 10/06/2010 01:50:00 PM
Labels: Black box testing, Bugs, Choice, Fault, Organization, Product, Quality, Resources, Software, Software testing, Specification, Test, White box testing
Tuesday, August 31, 2010
Features of Software Reliability Testing and Reliability Techniques
Computer systems are an important part of our society. Reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly. Software Reliability is the probability of failure-free software operation for a specified period of time in a specified environment. Software Reliability is also an important factor affecting system reliability.
A completely different approach is “reliability testing”, where the software is subjected to the same statistical distribution of inputs that is expected in operation.
Reliability testing will tend to uncover earlier those failures that are most likely in actual operation, thus directing efforts at fixing the most important faults.
For the fault-finding effectiveness of reliability testing to deliver on its promise of better use of resources, the testing profile must be truly representative of operational use.
Reliability testing is attractive because it offers a basis for reliability assessment.
Reliability testing may be performed at several levels. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels.
A key aspect of reliability testing is to define "failure".
Software Reliability Techniques
- Trending reliability tracks the failure data produced by the software system to develop a reliability operational profile of the system over a specified time.
- Predictive reliability assigns probabilities to the operational profile of a software system.
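As a purely illustrative sketch of profile-driven reliability testing as described above (the operational profile, the operations and the assumed 1% failure rate below are all invented), test inputs are sampled according to the statistical distribution expected in operation and a failure rate is estimated from the observed results:

```python
# Illustrative sketch of profile-driven reliability testing: operations are
# sampled according to an assumed operational profile, executed against
# stand-in implementations, and the observed failure rate is reported.
# The profile, operations and counts below are invented for this example.

import random

# Assumed operational profile: relative frequency of each operation in use.
OPERATIONAL_PROFILE = {
    "lookup": 0.70,
    "update": 0.25,
    "report": 0.05,
}

# Stand-ins for the system under test; each returns True on success.
OPERATIONS = {
    "lookup": lambda: True,
    "update": lambda: random.random() > 0.01,   # pretend 1% of updates fail
    "report": lambda: True,
}


def reliability_test(runs: int = 10_000) -> float:
    """Sample operations per the profile and return the observed failure rate."""
    names = list(OPERATIONAL_PROFILE)
    weights = [OPERATIONAL_PROFILE[n] for n in names]
    failures = 0
    for _ in range(runs):
        name = random.choices(names, weights=weights, k=1)[0]
        if not OPERATIONS[name]():
            failures += 1
    return failures / runs


if __name__ == "__main__":
    rate = reliability_test()
    print(f"observed failure rate: {rate:.4%}")
    # A reliability figure such as MTBF would then be derived from this rate
    # and the time per operation in the real operational environment.
```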
Posted by Sunflower at 8/31/2010 06:11:00 PM
Labels: Effective, Failure, Fault, Operational, Reliability, Reliability testing, Reliable, Resources, Systems, Techniques