Sunday, July 21, 2013
Comparison between Virtual Circuit and Datagram subnets
Posted by Sunflower at 7/21/2013 08:54:00 PM | 0 comments
Labels: Bandwidth, Communication, Congestion, Crash, Datagram Subnets, Destination, Differences, Memory, Packets, Phases, Resources, Router, Routing, Source, Time, Virtual Circuits
Tuesday, June 11, 2013
Analytics - Measuring data relating to user information - Part 7
In some of the previous posts, I have been taking examples of how product decisions can be made on the basis of data collected from the product while it is in the hands of customers. In this post, I will take another such example - a case where there were certain reports from customers, and product management was not sure whether the feedback was accurate and was looking for more evidence to substantiate or refute the problems. In this example, feedback came in from several channels saying that customers perceived the quality level of the product to be not as good as in previous versions, and that this showed up as more cases of crashes.
Now, this is useful feedback, but can you take it as gospel truth? On the surface, it seems like clear feedback that you should acknowledge and act on. That would mean some kind of investigation that causes you to behave differently from how you behaved after previous releases, since that would be in the customer's interest. You would need to commit more effort to investigating and fixing quality problems; and even though this might be in the customer's interest, it is a cost to ongoing product development. There is also the contrary view - with more channels for expressing discontent, such as user forums and community groups, it is possible that the quality level is the same as in previous versions, and that the information collection systems are simply catching more of this discontent.
Now you are stuck. Both views sound plausible, but you need to take a decision one way or the other. This is where data collection from user systems works well. One of the first items you should capture is whether the application closed normally or abnormally (such as in a crash, or when the user was forced to terminate the application after it hung), and you should capture this across the operating systems supported by the application. Further, you would need to do this for the different features and dialogs present in the application. Once you are capturing this information, you can do a lot in terms of determining how often a feature, or the entire application, crashes in the hands of users and where the crash happens (though determining the actual cause of the crash will take considerable development effort). This will also help determine whether the frequency of crashes is higher than in previous versions of the application.
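As an illustration of this kind of capture, here is a minimal Python sketch; the event fields and version numbers are assumptions for the example, not the product's actual telemetry schema.

```python
# Minimal sketch: record how each session ended and aggregate an
# abnormal-exit rate per release so it can be compared across versions.
# All field names and values here are illustrative assumptions.
from collections import defaultdict

def record_session_end(log, version, os_name, feature, exit_kind):
    """exit_kind is one of 'normal', 'crash', or 'hang_killed'."""
    log.append({"version": version, "os": os_name,
                "feature": feature, "exit": exit_kind})

def crash_rate_by_version(log):
    totals, abnormal = defaultdict(int), defaultdict(int)
    for event in log:
        totals[event["version"]] += 1
        if event["exit"] != "normal":
            abnormal[event["version"]] += 1
    return {v: abnormal[v] / totals[v] for v in totals}

log = []
record_session_end(log, "12.0", "Windows 7", "export_dialog", "normal")
record_session_end(log, "13.0", "Windows 8", "export_dialog", "crash")
print(crash_rate_by_version(log))   # compare current release with the last one
```

Comparing the abnormal-exit rate of the current release with that of the previous one is what lets you decide between the two interpretations above.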
Once you have such data, and it has checked out to be accurate in some respects (for example, if your testing team is seeing crashes in reasonably similar areas, that confirms the data to a large extent), you can make product-level decisions. If that means you need to spend time on product stability and quality, then you do so; otherwise, if the quality level seems fine, you know that the information coming from customers should be handled through regular support mechanisms and does not need extra effort from the development team.
Posted by Ashish Agarwal at 6/11/2013 09:56:00 PM | 0 comments
Labels: Analytics, Application crash, Capturing user data, Crash, Data Analysis, Data measurement, Decision making, Feature requirements, Informed decision making, User data
Friday, April 26, 2013
What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?
How can a system handle thrashing?
Posted by Sunflower at 4/26/2013 03:32:00 PM | 0 comments
Labels: Application, Causes, Communication, CPU, Crash, Data, Detect, Instruction, Memory, OS, Page Fault, pages, Paging, Performance, Physical, program, System, Techniques, Thrashing, Utilization
Saturday, September 1, 2012
What are the types of Exception available in Win Runner? How do you handle an Exception in WinRunner?
- Pop up exceptions
- Object exceptions
- TSL exceptions
- Web exceptions
How are the different types of exceptions handled in WinRunner? The mechanisms are listed below, with a conceptual sketch after the list.
- User-defined handlers: the name of such a handler can be specified by clicking on the user-defined function name and changing it as required.
- Default actions: WinRunner makes its own choice of whether to press the OK or the Cancel option. The desired default handler can be selected in the dialog box.
- web_exception_handler_dialog_click_default: activates the default button.
- web_exception_handler_fall_retry: reloads the web page and activates the default button.
- web_exception_enter_username_password: uses the given user id and password.
- web_exception_handler_dialog_click_yes: activates the Yes button.
- web_exception_handler_dialog_click_no: activates the No button.
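WinRunner's actual exception handling is configured through its own dialogs and TSL; the Python sketch below is only a conceptual illustration of the register-a-handler-for-a-named-exception pattern behind the list above, and every name in it is hypothetical.

```python
# Conceptual sketch only: default actions plus user-defined handlers,
# dispatched by exception name. Not WinRunner's real API.
DEFAULT_HANDLERS = {
    "popup_dialog": lambda ctx: print("pressing default button"),
    "login_prompt": lambda ctx: print("entering stored user id/password"),
}

user_handlers = {}

def define_exception(name, handler):
    """Register a user-defined handler for a named exception."""
    user_handlers[name] = handler

def handle_exception(name, ctx=None):
    # User-defined handlers take precedence over default actions.
    handler = user_handlers.get(name, DEFAULT_HANDLERS.get(name))
    if handler is None:
        raise KeyError(f"no handler registered for {name!r}")
    handler(ctx)

# Usage: override the default action for popup dialogs, then trigger both.
define_exception("popup_dialog", lambda ctx: print("clicking Cancel instead"))
handle_exception("popup_dialog")
handle_exception("login_prompt")
```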
Posted by Sunflower at 9/01/2012 12:06:00 PM | 0 comments
Labels: Application, Automated Software Testing, Automation, Crash, Dialog box, Display, Environment, Error, Event, Exceptions, Execution, Handlers, Object, TSL, Pop up, Testing tools, Tests, Tools, Web, Window, WinRunner
Friday, June 22, 2012
How is optimization of smoke testing done?
- The code was compiled with too much optimization, and
- the data directory is not pointed to properly by the path.
Steps for optimization of smoke tests:
- Compiler bugs and errors, or
- the path has not been set properly.
Compiler bugs and errors
If the path is not set properly
When does a system or an application crash?
How to optimize smoke tests?
Posted by Sunflower at 6/22/2012 05:00:00 AM | 0 comments
Labels: Application, Bugs, Code, Compiler, Crash, Data, Errors, Execute, Methodology, optimization, Optimizing, Path, Re-compile, Smoke Testing, Smoke tests, Software Systems, Steps, Test cases, Testers, Tests
Thursday, February 23, 2012
What is meant by severity of a bug? What are different types of severity?
We all know what a software bug is: a flaw, error, or mistake in a software system or application that can cause it to crash or fail. Pretty simple!
But very few of us are actually aware of the severity of a bug, i.e., how much damage it can cause to a software system or application.
- Bugs are, of course, the result of mistakes made by software programmers and developers while coding the program.
- Sometimes incorrect compilation of the source code can also introduce bugs.
- A buggy program is very hard to clean up.
- Bugs can also have a chain reaction, i.e., one bug giving rise to another, which in turn gives rise to yet another, and so on.
- Each bug has its own level of severity, i.e., its own degree of harm to the software system or application.
- While some bugs can wreak total destruction on a program, others never even get detected.
- Some bugs can put the program out of service entirely.
- In contrast to these obviously destructive bugs, some bugs, such as security bugs, cause no visible failure at all and yet silently expose the system to attack.
WHAT IS SEVERITY OF A BUG & ITS TYPES
-"Severity can be thought of as a measure of the harm that can be caused by a bug."
- Severity is an indication of how bad or harmful a bug is.
- The higher the severity of a bug, the higher the priority it demands.
- The severity of its bugs can sometimes be used as a measure of a software product's overall quality.
- Severity plays a major role in deciding the priority of fixing the bug.
- It is important that the severity of the bugs is assigned in a way that is logical and easy to understand.
There are several criteria by which the severity of a bug can be measured. One of the most commonly used ranking schemes is given below:
1. Severity 1 Bugs
Bugs in this category stop the meaningful operations being carried out by the software program or application.
2. Severity 2 Bugs
Bugs in this category cause the failure of software features and functionality, but the application still continues to run.
3. Severity 3 Bugs
Bugs in this category can cause the software system or application to generate unexpected results and behave abnormally. These bugs are responsible for inconsistency in the software system.
4. Severity 4 Bugs
Bugs in this category basically affect the design of a software system or application.
COMPONENTS OF SEVERITY
Severity has two main components namely the following:
1. Impact
- It is a measure of the disruption caused to users when they encounter the bug while working.
- It reflects the degree to which the bug interferes with the user performing a task.
- Impact itself is classified into various levels.
2. Visibility
- It is a measure of the probability of encountering the bug, or in other words, of how close the bug lies to the commonly executed paths.
- In practice, it corresponds to how frequently the bug occurs.
Severity is calculated as the product of impact and visibility. Severity gives a measure of the perceived quality and usefulness of the software product, so it would not be wrong to say that it provides an overall measure of the quality of the software system or application.
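As a small worked example of the impact-times-visibility idea (the 1-to-5 scales here are an assumption for illustration, not a standard):

```python
def severity(impact, visibility):
    """impact: disruption when the bug is hit (1 = minor, 5 = blocking).
    visibility: how likely users are to hit it (1 = rare, 5 = every run)."""
    return impact * visibility

# A crash in a dialog that almost every user opens:
print(severity(impact=5, visibility=4))   # 20 -> top-severity bug
# A cosmetic misalignment on a rarely used screen:
print(severity(impact=1, visibility=1))   # 1  -> low-severity design issue
```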
Posted by Sunflower at 2/23/2012 11:59:00 AM | 0 comments
Labels: Abnormal, Application, Bugs, Code, Compile, Crash, Design, Developers, Errors, Failure, Functionality, Inconsistency, Measures, Mistakes, Priority, Quality, Severity, Software Systems, Types
Wednesday, January 4, 2012
What are different aspects of stress testing?
Stress testing can be defined as a form of testing carried out to determine the stability and the stress-handling capacity of a software system or module. Stress testing is all about exercising the software system or application beyond its normal operational capacity, right up to its breaking point, in order to observe the results.
Stress testing has a fairly broad meaning. What is generally understood by a stress test?
- It refers to a test that mainly focuses upon the availability, error handling, and robustness of a software system or application.
- In stress testing, the software system or application is subjected to heavy loads of tasks.
- It is not about examining proper behavior under normal operational conditions or in the normal user environment.
Typically the goal of stress testing is to test whether or not the software system or application crashes or fails in the case of catastrophic problems like unavailability of sufficient computing resources. These computational resources may include disk space or memory. It is also done to determine if the system crashes or fails under the situation of denial of service attacks and unusually high concurrency.
Stress testing, load testing, and volume testing can all seem like very similar kinds of testing. The following examples of stress testing will clear up some of the confusion:
- Stress testing for web server:
A web server can be subjected to stress testing using bots, scripts, and denial-of-service tools to determine its performance and behavior under peak data and task loads (a small sketch of this idea follows the comparison below).
- Stress testing can be studied in contrast with load testing.
Load testing is basically carried out against the entire environment and a large database, and is also carried out to determine the response time of the software system or application, whereas stress testing focuses exclusively on identifying particular transactions and pushing them to the level at which their execution breaks.
Another point: during stress testing, if the transactions themselves are being stressed, chances are the database will not experience a heavy data load; however, if the transactions are not stressed, the database may carry a heavy workload instead.
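As a rough illustration of the web-server example above, here is a minimal Python sketch that fires a burst of concurrent requests and reports failures and the worst response time; the URL, request count, and concurrency are assumptions, and it should only ever be pointed at a server you own.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # hypothetical server under test
REQUESTS = 200                   # illustrative load, not a recommendation
CONCURRENCY = 50

def hit(_):
    """Send one request and report (succeeded, latency_in_seconds)."""
    start = time.time()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:              # covers URLError, timeouts, refused connections
        ok = False
    return ok, time.time() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(hit, range(REQUESTS)))
    failures = sum(1 for ok, _ in results if not ok)
    worst = max(latency for _, latency in results)
    print(f"failures: {failures}/{REQUESTS}, worst latency: {worst:.2f}s")
```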
SOME IMPORTANT POINTS:
- Stress testing is another word for system stress testing.
- It can be defined as loading the system with more concurrent users than it is designed to handle.
- This leads to the breakage of the weakest link in the whole software system or application.
- While carrying out the stress testing the software engineers, developers or testers need to test the whole software system or application under the desired expected stress as well as under the accelerated stress.
- The goal here is to determine the working life of the software system.
- It is also aimed at determining the modes of failure for the software system or application.
- For the hardware counterpart of a complete system, the stress testing can be defined as the subjecting of the concerned hardware to the exaggerated levels of stress.
- This is basically done to determine the stability of the hardware system when used in a normal environment rather than a testing environment.
- Before modifying CPU parameters during overclocking, overvolting, undervolting, or underclocking, it is necessary to verify that the new CPU parameters, such as frequency and core voltage, can sustain heavy CPU loads.
- Stress testing of such parameters is usually carried out by executing a CPU-intensive program for a prolonged period of time and observing whether the system crashes or hangs (a minimal sketch follows this list).
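A minimal sketch of that last point, assuming a short illustrative duration (real burn-in runs last hours): keep every core busy and see whether the machine survives without hanging or crashing.

```python
import multiprocessing
import time

def burn(seconds):
    """Busy-loop doing throwaway arithmetic to keep one core fully loaded."""
    end = time.time() + seconds
    x = 0
    while time.time() < end:
        x = (x * 31 + 7) % 1000003
    return x

if __name__ == "__main__":
    duration = 30   # seconds of sustained load (illustrative only)
    cores = multiprocessing.cpu_count()
    with multiprocessing.Pool(cores) as pool:
        pool.map(burn, [duration] * cores)
    # Reaching this line without a hang or crash means the system survived
    # this (very short) stress run.
    print("stress run completed")
```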
Posted by Sunflower at 1/04/2012 02:38:00 PM | 0 comments
Labels: Application, Capacity, Crash, Focus areas, Goals, Load, Memory, Operational, Resources, Software testing, Stability, Stress, Stress testing, System Testing, Tasks, Tests, Users
Wednesday, December 14, 2011
What are different characteristics of recovery testing?
The name of recovery testing itself makes clear what it is. We all know what recovery means: to recover is to return to a normal state after some failure, illness, and so on. The same qualitative aspect is present in today's software systems and applications.
- The recovery of a software system or application is defined as its ability to recover from hardware failures, crashes, and similar problems that are quite frequent with computers.
- Before the release of any software, it needs to be tested for its recovery factor. This is done through recovery testing.
- Recovery testing can be defined as testing a software system or application to determine its ability to recover from fatal system crashes and hardware problems.
One thing to keep in mind is that recovery testing is not to be confused with reliability testing, since reliability testing aims at discovering the points at which the software system or application tends to fail.
- In a typical recovery test, the system is forced to fail, crash, or hang in order to check how the recovery mechanism of the software system or application responds and how robust it is.
- The software system or application is forced to fail in a variety of ways.
- Every attempt is made to discover the failure factors of the software system or application.
Objectives of Recovery Testing
- Apart from the recovery factor, the recovery testing also aims at determining the speed of recovery of the software system.
- It aims to check how fast the software system or application is able to recover from a failure or crash.
- It also aims to check how well the system recovers.
- It checks the quality of the recovered software system or application, since there is a particular type and extent to which the software is expected to recover.
- That type and extent are specified in the requirements and specifications section of the documentation.
- Recovery testing is all about testing the recovering ability of the software system or application i.e., how well it recovers from the catastrophic problems, hardware failures and system crashes etc.
The following examples will further clarify the concept of recovery testing:
1. Keep the browser running and open multiple sessions in it. Then restart your system. After the system has booted, check whether the browser is able to recover all of the sessions that were running before the restart. If the browser can recover them, it is said to have good recovery ability.
2. Suddenly restart your computer while an application is running. After the boot, check whether the data the application was working on is still intact and valid. If the data is still valid, intact, and safe, the application has a great deal of recovery capability.
3. Set an application such as a file downloader to receiving or downloading mode, then unplug the network cable. After a few minutes, plug the cable back in, let the application resume its operation, and check whether it is able to continue receiving data from the point where it left off. If it is not able to resume, it is said to have a poor recovery factor.
Recovery testing tests the ability of application software to restart the operations that were running just before the loss of the integrity of the applications. The main objective of recovery testing is to ensure that the applications continue to run even after the failure of the system.
Recovery testing ensures the following:
- Data is stored in a preserved location.
- Previous recovery records are maintained (see the checkpointing sketch after this list).
- A recovery tool has been developed and is available at all times.
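As a small sketch of the first two points (the file name and step count are illustrative assumptions), a long-running task can write its progress to a checkpoint file so that a restart resumes from where it stopped instead of starting over:

```python
import json
import os

CHECKPOINT = "recovery_checkpoint.json"   # the preserved "recovery record"

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f).get("last_done", 0)
    return 0

def save_checkpoint(step):
    with open(CHECKPOINT, "w") as f:
        json.dump({"last_done": step}, f)

def run(total_steps=100):
    start = load_checkpoint()
    for step in range(start + 1, total_steps + 1):
        # ... do one unit of real work here ...
        save_checkpoint(step)   # record progress so a crash loses little work
    print(f"finished; resumed from step {start}")

if __name__ == "__main__":
    run()
```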
Posted by Sunflower at 12/14/2011 06:59:00 PM | 0 comments
Labels: Ability, Application, Crash, Data, Errors, Factors, Failure, Hardware, Objectives, Operations, Problems, Quality, Recover, Recovery, Recovery Testing, Software Systems, Techniques
Tuesday, December 6, 2011
What are different characteristics of destructive testing?
Nothing checks the robustness of a software system or application better than destructive testing.
- Destructive testing is basically meant to determine the robustness of a software system or application.
- In order to determine the robustness of a software system or application, it is subjected to very harsh tests and attempts are made to make the program crash or hang.
- The aim of the destructive testing is to cause the failure of the program.
- Destructive testing follows a typical process, which has the following aspects:
Waterfall development model or CMMI traditional model
- It has always been common practice for software testing to be performed by an independent group of testers after the functionality of the software system has been fully developed.
- Destructive testing is then carried out before the product is delivered to the client or customer.
- In this kind of practice, the testing stage is also used as a project buffer in order to compensate for project delays.
- However, this often compromises the time available for testing.
- In a variant of the same model, software testing is started at the same time as the project and continues as a regular process until the project is finished.
Extreme development model or Agile model
- The extreme development model and the agile software development model follow the test driven software development model.
- Following this procedure, unit tests are carried out first by the software developers or engineers.
- These unit tests fail initially, and this is expected (see the sketch after this list).
- As the code is written and subjected to these tests, more and more units of the software system pass those unit tests successfully.
- The test cases being used in the testing are updated regularly for the new faults and errors discovered.
- The test cases thus updated are also made more efficient with the addition of the regression tests developed during the whole process.
- Unit tests are carried out simultaneously with the development and progress of the source code.
- They eventually become an integral part of the build process.
- This model aims at achieving continuous deployment, i.e., software updates can be released to the public easily and frequently.
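A minimal sketch of that test-first flow using Python's unittest; the function and its behaviour are invented purely for illustration. In real test-driven work the tests below would be written first, fail, and only then would the implementation be added to make them pass.

```python
import unittest

def word_count(text):
    # Implementation added only after the tests below were already failing.
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_counts_words(self):
        self.assertEqual(word_count("stress test the build"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```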
Although there are some variations between the different models of testing, the testing cycle is broadly the same for every model. The following steps are involved in the testing cycle:
- Requirement analysis
- Test planning
- Test development
- Test execution
- Test reporting
- Test result analysis
- Defect retesting
- Regression testing
- Test closure
Posted by Sunflower at 12/06/2011 12:40:00 PM | 0 comments
Labels: Aim, Application, Aspects, Characteristics, CMMi Model, Crash, Destructive testing, Extreme development, Models, Procedure, Robustness, Software Systems, Test cases, Tests, Waterfall model
Wednesday, September 1, 2010
What is Recovery Testing and what are its features?
Recovery testing tells how well an application is able to recover from crashes and hardware failures. Recovery testing should not be confused with reliability testing, which tries to discover the specific point at which failure occurs.
- Recovery is the ability to restart operations after the integrity of the application has been lost.
- The time taken to recover depends upon the number of restart points, the volume of the application, the training and skill of the people conducting the recovery activities, and the tools available for recovery.
- Recovery testing ensures that the operations can be continued after a disaster.
- Recovery testing verifies the recovery process and its effectiveness.
- In recovery testing, adequate backup data is preserved and kept in a secure location.
- Recovery procedures are documented.
- Recovery personnel have been assigned and trained.
- Recovery tools have been developed and are available.
To use recovery testing, procedures, methods, tools, and techniques are assessed to evaluate their adequacy. Recovery testing can be done by introducing a failure into the system and checking whether the system is able to recover (a minimal sketch of this idea follows). A simulated disaster is usually performed on one aspect of the application system at a time; when there are many potential failures, recovery testing should be carried out for one segment and then for the others.
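A minimal sketch of "introduce a failure and check that the system recovers"; the worker and supervisor here are stand-ins invented for the example, where a worker process is killed to simulate a crash and a simple supervisor is expected to bring it back.

```python
import multiprocessing
import time

def worker():
    """Stand-in for the application under test: it just stays alive."""
    while True:
        time.sleep(0.1)

def simulate_failure_and_recover():
    # Start the worker and confirm it is running.
    proc = multiprocessing.Process(target=worker, daemon=True)
    proc.start()
    assert proc.is_alive()

    # Inject the failure: a simulated crash of the worker.
    proc.terminate()
    proc.join()
    assert not proc.is_alive()

    # "Recovery": the supervisor restarts the worker and verifies it is back.
    proc = multiprocessing.Process(target=worker, daemon=True)
    proc.start()
    time.sleep(0.2)
    recovered = proc.is_alive()
    proc.terminate()
    proc.join()
    return recovered

if __name__ == "__main__":
    print("recovered:", simulate_failure_and_recover())
```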
Recovery testing is used when continuity of the system is needed in order for the system to perform or function properly. The user estimates the losses and the time span for carrying out recovery testing. Recovery testing is done by system analysts, testing professionals, and management personnel.
Posted by Sunflower at 9/01/2010 07:23:00 PM | 0 comments
Labels: Applications, Black box testing, Crash, Features, Objectives, Recover, Recovery, Recovery Testing, System, Usage