
Sunday, July 21, 2013

Comparison between Virtual Circuit and Datagram subnets

Difference #1:
- In virtual circuits, the packets carry a short circuit number rather than the full address of the destination.
- This reduces the memory and bandwidth required.
- This also makes virtual circuits cheaper.
- Datagrams, on the other hand, have to carry the full destination address rather than a short circuit number.
- This causes a significant per-packet overhead in datagram subnets.
- It also leads to wastage of bandwidth.
- All this implies that datagram subnets are more costly in this respect than virtual circuits.

Difference #2:
- A setup phase is required for virtual circuits.
- Establishing a circuit takes time and consumes resources along the path.
- A datagram subnet, in contrast, does not require a setup phase.
- Hence, no resources are spent on connection establishment.

Difference #3:
- In virtual circuits, routers use the circuit number as an index.
- These numbers are stored in a table and are used to find where the packet should go.
- This procedure is quite simple compared with the one used in datagram subnets.
- In a datagram subnet, determining the destination of each packet is more complex, since every router must match the full destination address against its routing table, as the sketch below illustrates.
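A small illustrative sketch of the two lookups in Python (the table entries and addresses are invented examples, not from any real router):

    import ipaddress

    # Virtual circuit: the (incoming line, circuit number) pair indexes
    # directly into a table -- a constant-time dictionary lookup.
    vc_table = {
        (1, 7): (3, 12),   # in on line 1 as VC 7 -> out on line 3 as VC 12
    }

    def forward_vc(in_line, vc_id):
        return vc_table[(in_line, vc_id)]

    # Datagram: every packet carries the full destination address, and
    # the router must find the longest matching prefix in its table.
    routing_table = {
        ipaddress.ip_network("10.0.0.0/8"): 2,
        ipaddress.ip_network("10.1.0.0/16"): 4,   # more specific route
    }

    def forward_datagram(dest):
        addr = ipaddress.ip_address(dest)
        matches = [(net, line) for net, line in routing_table.items()
                   if addr in net]
        net, line = max(matches, key=lambda m: m[0].prefixlen)  # longest prefix wins
        return line

    print(forward_vc(1, 7))              # (3, 12)
    print(forward_datagram("10.1.2.3"))  # 4, via the more specific /16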

Difference #4:
- Virtual circuits allow resources to be reserved in advance, when the circuit is established.
- This has a great advantage: congestion can be avoided in the subnet.
- In datagram subnets, however, it is quite difficult to avoid congestion.

Difference #5:
- If a crash occurs in a router, it loses its memory.
- Even if it comes back up moments later, all the virtual circuits that pass through it must be aborted.
- This is not a major problem in datagram subnets.
- Here, if a router crashes, the only packets that suffer are the ones that were queued at that router at that instant.

Difference #6:
- A virtual circuit can vanish as a result of a loss or fault on the communication line it uses.
- In datagram subnets, it is comparatively easy to compensate for a fault or loss on a communication line.

Difference #7:
- In virtual circuits there is one more cause of traffic congestion.
- This cause is the use of fixed routes for transmitting the data packets across the network.
- This also leads to the problem of unbalanced traffic.
- In datagram subnets, the routers are given the responsibility of balancing traffic over the entire network.
- This is possible because the route is allowed to change halfway through a connection's stream of packets.

Difference #8:
- Virtual circuits are one way of implementing connection-oriented services.
- For various types of datagram subnets, a number of protocols are defined on top of the Internet Protocol.
- The Internet Protocol provides the datagram service at the internet layer.
- In contrast with virtual circuits, datagram subnets provide a connectionless service.
- It is a best-effort message delivery service, and is therefore unreliable.
- A number of higher-level protocols such as TCP depend on the datagram service of the Internet Protocol.
- This calls for additional functionality, such as reliability and ordering, to be added above IP.
- The datagram service of IP is also used by UDP.
- The fragments of a datagram may be referred to as data packets.
- IP and UDP both provide unreliable services, and this is why their units of transfer are termed datagrams.
- The units of TCP are referred to as TCP segments to distinguish them from datagrams.


Tuesday, June 11, 2013

Analytics - Measuring data relating to user information - Part 7

This series of posts is about analytics, the measurement of information related to users. The last post in this series (Measuring data relating to user information - Part 6) was about an error in data collection, as well as the pitfalls of a strategy in which business decisions are made solely on the basis of data collected through analytics. Analytics should be one of the pillars of a decision-making strategy, with market research and other factors also contributing to how and why a decision is made. If you jump into making decisions without a proper decision-making strategy, there is a good chance that the decision making will go down a path that is faulty or inaccurate.
In some of the previous posts, I took examples that show where decisions about the product can be made on the basis of data collected from the product in the hands of customers. In this post, I will take another such example - a case where there were certain reports from customers, and product management was not sure whether the feedback was accurate and was looking for more evidence to substantiate or refute the problems. In this example, there was feedback from several channels suggesting that customers perceived the quality level of the product to be worse than in previous versions, manifested in more cases of crashes.
Now, this is good feedback, but can you take it as gospel truth? On the surface, this seems like clear feedback that you should recognize and act on. There should be some kind of investigation that would cause you to behave differently from how you have behaved after previous releases, since that would be in the customers' interest. You would need to commit more effort to investigating and solving quality problems; and even though this might be in the customers' interest, it is a cost to ongoing product development. And there is the contrary view - with more means of expressing discontent, such as user forums and community groups, there is the possibility that the quality level is the same as in previous releases, and it is just that the information collection systems are catching more of this discontent.
Now, you are stuck. Both sound plausible, but you need to take a decision one way or the other. This is where data collection from user systems works well. One of the first items you should be capturing is information about whether the user's application has closed normally, or whether it has closed abnormally (such as in a crash, or when the user was forced to terminate the application after it was hanging), and you should capture this information for each operating system supported by the application. Further, you would need to do this for the different features and dialogs present in the application. Once you are capturing this information, there is a lot you can do in terms of determining how often a feature or the entire application crashes in the hands of users, and where the crash happens (though this will need a lot of development effort to determine what causes the crash). This also helps determine whether the frequency of crashes is higher than in previous versions of the application. A minimal sketch of such capture follows.
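The sketch uses a marker-file technique in Python; the file name, event fields, and logging call are hypothetical, not taken from any specific product:

    import json
    import os
    import platform

    MARKER = os.path.expanduser("~/.myapp_session")   # hypothetical app name

    def log_event(event):
        # Stand-in for whatever upload pipeline the product actually uses.
        print(json.dumps(event))

    def entering_feature(name):
        # Record the active feature/dialog so a crash can be attributed.
        with open(MARKER, "w") as f:
            f.write(name)

    def on_launch():
        # If the marker from the last run still exists, the app never
        # reached clean shutdown: count that as an abnormal exit and
        # attribute it to the last recorded feature.
        if os.path.exists(MARKER):
            with open(MARKER) as f:
                log_event({"event": "abnormal_exit",
                           "os": platform.platform(),
                           "last_feature": f.read()})
        entering_feature("startup")

    def on_clean_shutdown():
        log_event({"event": "normal_exit", "os": platform.platform()})
        os.remove(MARKER)

    on_launch()                        # at startup
    entering_feature("print_dialog")   # as the user moves around
    on_clean_shutdown()                # on normal exit; skipped if the app crashes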
Once you have such data, and the data has checked out as accurate in some respects (for example, if your testing team is seeing crashes in reasonably similar areas, that confirms the data to a large extent), you can make product-level decisions. If that means you need to spend some time on product stability and quality, then you need to do so; otherwise, if the quality level seems fine, then you know that the information you are getting from customers should be handled through regular support mechanisms and does not need the development team to spend extra effort.


Friday, April 26, 2013

What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem?


- Thrashing takes place when the virtual memory subsystem of the computer is constantly paging.
- It rapidly exchanges data in memory with data on the disk, to the exclusion of most application-level processing.
- Thrashing degrades the performance of the computer and may even cause it to collapse.
- The problem worsens until the issue is identified and addressed.
- If not enough page frames are available for a job, it becomes very likely that the system will thrash, since thrashing is an activity involving heavy paging.
- This also leads to a high page-fault rate.
- This in turn cuts down the utilization of the CPU.
- Modern systems use paging to execute many programs at once.
- However, this is what makes them prone to thrashing.
- But this occurs only if the system does not currently have as much memory as the applications require, or if the disk access time is too long.

- Thrashing is also quite common in communication systems where conflicts over internal bus access are common.
- The degree by which the latency and throughput of a system degrade depends upon the algorithms and the configuration in use.
- In systems using virtual memory, workloads and programs exhibiting insufficient locality of reference may lead to thrashing.
- Thrashing occurs when the physical memory of the system cannot hold the working set of the program or workload.
- Thrashing can also be called constant data swapping.
- Older systems were low-end computers, i.e., the RAM they had was insufficient for modern usage patterns.
- Thus, when their memory was increased, they became noticeably faster.
- This happened because the additional memory reduced the amount of swapping and thus increased the processing speed.
- The IBM System/370 mainframe faced this kind of situation.
- In it, a certain instruction sequence consisted of an execute instruction pointing to a move instruction.
- Both of these instructions crossed a page boundary, and the source and the destination of the move each crossed a page boundary as well.
- Thus, this single instruction required 8 pages in memory at the same time.
- If the operating system allocated fewer than 8 pages, a page fault was sure to occur.
- Every attempt to restart the failing instruction would then fault again, leading to thrashing.
- This could reduce CPU utilization to almost zero!

How can a system handle thrashing?

For resolving the problem of thrashing, the following things can be done:
1. Increase the amount of main memory (RAM) in the system. This is the best solution and helps in the long term as well.
2. Decrease the number of programs the system executes at once.
3. Replace programs that use memory heavily with less memory-hungry equivalents.
4. Improve the spatial locality of reference of the programs.
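As for how the system detects thrashing: the usual signal is a sustained spike in the major page-fault rate combined with low CPU utilization. A minimal monitoring sketch in Python, assuming a Linux machine where /proc/vmstat exposes the cumulative pgmajfault counter (the threshold below is an arbitrary illustrative value):

    import time

    def major_faults():
        # Read the cumulative major page-fault count from /proc/vmstat.
        with open("/proc/vmstat") as f:
            for line in f:
                name, value = line.split()
                if name == "pgmajfault":
                    return int(value)
        return 0

    THRESHOLD = 500   # major faults per second; tune for the machine

    last = major_faults()
    while True:
        time.sleep(1)
        now = major_faults()
        rate = now - last   # faults in the last second
        last = now
        if rate > THRESHOLD:
            print(f"possible thrashing: {rate} major faults/s")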

- Thrashing can also occur in cache memory, i.e., the faster storage that is used for speeding up data access.
- This is called cache thrashing.
- It occurs when the cache is accessed in a pattern that leaves it of no benefit.
- When this happens, many main-memory locations compete for the same cache lines, which in turn leads to a large number of cache misses.


Saturday, September 1, 2012

What are the types of exceptions available in WinRunner? How do you handle an exception in WinRunner?


In this article, we talk about the various kinds of exceptions available in WinRunner and how to handle them. Basically, four types of exceptions are available in WinRunner, as listed below:
  1. Pop up exceptions
  2. Object exceptions
  3. TSL exceptions and lastly
  4. Web exceptions
Web exceptions are available only if you have installed the web add-ins.

How are different types of exceptions handled in WinRunner?

- A pop-up exception handler is provided in WinRunner for handling the pop-ups that often show up while test scripts are executing during an acceptance test run.
- WinRunner can be made to handle pop-ups by having it learn the window and by specifying a handler for the exception.
- These handlers can be:
  1. User-defined handlers: The names of these handlers can be specified by clicking on the user-defined function name and changing it as you wish.
  2. Default actions: WinRunner makes its own choice of whether to press the OK or Cancel option. The desired default handler can be selected in the dialog box.
What if your batch test is executing on a highly unstable version of the software system or application?
- Obviously it may crash, and you will want to recover the test execution.
- This is possible through TSL exceptions, which help in test recovery by instructing WinRunner to exit the current test and restart the application.
- WinRunner can also easily be instructed on how to handle an unexpected event or error that may occur in the testing environment while you test your web site.
Here is how to handle such exceptions.
- Whenever you load a web test add-in, WinRunner can be instructed on how to handle a particular exception that occurs during a test run on your web site.
- The simplest example is the security alert dialog box that sometimes appears during a test run.
- The user can resume normal testing by clicking the Yes button of the security alert dialog box.
- All the exceptions supported by WinRunner are kept in a list that can be viewed in the web exception editor.
- This list can be modified, and additional exceptions that you want WinRunner to support can be configured and added to it.
- All new exceptions are added to the list of exceptions stored in the web exception editor.
- To do this, go to the Tools menu and select the Web Exception Handling option.
- This opens the web exception editor.
- There is a pointing hand; clicking on it adds a new exception to the list.
- To categorize the exception, select a category in the Type list.
The MSW_class, message, and title of the exception are displayed by the editor.
- There is an Action list available which provides options for carrying out the following actions:
  1. Web_exception_handler_dialog_click_default: activates the default button.
  2. Web_exception_handler_fall_retry: reloads the web page as well as activating the default button.
  3. Web_exception_enter_username_password: uses the given user id and password.
  4. Web_exception_handler_dialog_click_yes: activates the Yes button.
  5. Web_exception_handler_dialog_click_no: activates the No button.
- Other operations that can be carried out on exceptions in WinRunner are defining, modifying, activating, and deactivating them.
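Conceptually, all of this amounts to a table that maps recognized unexpected events to recovery actions. The sketch below shows that idea as a loose Python analogy; it is not TSL, and every name in it is invented for illustration:

    def click_default(title):
        print("activating the default button on:", title)

    def click_yes(title):
        print("activating the Yes button on:", title)

    def restart_application(title):
        print("exiting the current test and restarting the application")

    # The handler table: unexpected event -> recovery action, much like
    # the list maintained in the web exception editor.
    handlers = {
        "Security Alert": click_yes,
        "Unexpected Popup": click_default,
        "Application Crash": restart_application,
    }

    def handle_unexpected(title):
        action = handlers.get(title)
        if action is None:
            raise RuntimeError("no handler registered for: " + title)
        action(title)

    handle_unexpected("Security Alert")     # resumes testing via Yes
    handle_unexpected("Application Crash")  # recovers like a TSL exception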


Friday, June 22, 2012

How is optimization of smoke testing done?


Smoke testing, being one of the quick and dirty software testing methodologies, needs to be optimized. This article focuses on the need for optimizing smoke tests and on how to optimize them.

Let us first see the scenario behind the need for optimization of the smoke tests that are carried out on a software system or application.
- In most software development efforts, the code can be executed either directly from the command prompt or via a subroutine of the larger software system. This is called the command prompt approach.
- The code is designed to be self-aware as well as autonomous.
- By the code being self-aware, we mean that if anything goes wrong during execution, the code should explain it.
- Two types of problems are commonly encountered during testing, as mentioned below:
  1. The code was compiled with too much optimization, and
  2. The path does not point to the data directory properly.

Steps for Optimization of Smoke tests


- In order to confirm that the code was compiled correctly, one should run the test suite properly at least once.
- The first step in optimizing a smoke test is to run it and then examine the output (a minimal runner is sketched below).
- There are two possibilities: either the code will pass the test or it won't.
- If it fails, there are two places where your smoke test may have gone wrong:
  1. Compiler bugs and errors, or
  2. The path has not been set properly.
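A minimal smoke-test runner along these lines, in Python; the binary name, arguments, and expected output are placeholders:

    import subprocess
    import sys

    def smoke_test(cmd, expected):
        try:
            result = subprocess.run(cmd, capture_output=True, text=True,
                                    timeout=60)
        except FileNotFoundError:
            sys.exit("binary not found -- has the path been set properly?")
        if result.returncode != 0:
            # A crash here may point at compiler bugs or at code compiled
            # with overly aggressive optimization.
            sys.exit(f"smoke test crashed (exit {result.returncode}): "
                     f"{result.stderr}")
        if expected not in result.stdout:
            sys.exit("smoke test ran but produced unexpected output")
        print("smoke test passed")

    smoke_test(["./myapp", "--self-check"], "OK")   # placeholder command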

Compiler Bugs and Errors


Let us take up the first possibility, i.e., compiler bugs and errors.
- It is possible that correct code was not produced by the compiler.
- In cases of serious compiler bugs, there might even be a hardware exception; such errors are caused mainly by compiler bugs.
- In this case, the optimization level of the build should be reduced.
- After this, the code should be recompiled and observed again.
- Optimization is good for code, but when it is too aggressive it can definitely cause problems.

If the path is not set properly


- It is obvious that if the code is not able to locate its data files, it will show an error, and this happens because the path has not been set properly.
- In such cases you need to find out which path is wrong, fix it, recompile the whole code, and execute it once again.

When does a system or an application crash?


Don’t think that a software system or application crashes only when there has been aggressive optimization of the code!
- Crashes also happen in programs whose code has not been optimized at all.
- But in such cases, only compiler errors can be blamed, since this happens only if the compiler has not been set up properly on your system.
- If no program executes at all, it means your compiler is broken and you need to talk to your system administrator about it.

How to optimize smoke tests?


- A lot of help comes from MPGO (Managed Profile Guided Optimization).
- The best way to optimize any kind of testing is to maintain a balance between automated and manual testing.
- You run the MPGO tool with the necessary parameters for the test and then run the test; the test binaries will then be optimized.
- It is actually the internal binaries that are optimized, either fully or partially.
- Partially optimized binaries are deployed only in automated smoke testing.


Thursday, February 23, 2012

What is meant by severity of a bug? What are different types of severity?

We all know what a software bug is! It is a flaw, error, or mistake in a software system or application that can cause it to crash or fail. Pretty simple!
But very few of us are actually aware of the severity of a bug, i.e., how much damage it can cause to a software system or application.

- Bugs are, of course, the result of mistakes made by software programmers and developers while coding the program.
- Sometimes incorrect compilation of the source code can also cause bugs.
- A buggy program is very hard to clean up.
- Bugs can also have a chain reaction, i.e., one bug giving rise to another, which in turn gives rise to one more, and so on.
- Each bug has its own level of severity in the harm it causes to the software system or application.
- While some bugs can wreak total destruction on a program, there are some bugs that are never even detected.
- Some bugs can take the program out of service.
- In contrast to these harmful bugs, some bugs are even useful, such as security bugs that reveal weaknesses before attackers exploit them.

WHAT IS SEVERITY OF A BUG & ITS TYPES

-"Severity can be thought of as a measure of the harm that can be caused by a bug."
- Severity is an indication of how bad or harmful a bug is.
- The higher the severity of a bug, the more priority it seeks.
- Severity of the bugs of software can sometimes be used as a measure of its overall quality.
- Severity plays a major role in deciding the priority of fixing the bug.
- It is important that the severity of the bugs is assigned in a way that is logical and easy to understand.

There are several criteria by which the severity of a bug is measured. Mentioned below is one of the most commonly used ranking schemes for the severity of bugs:

1. Severity 1 Bugs
Bugs in this category halt the meaningful operations being performed by a software program or application.

2. Severity 2 Bugs
Bugs in this category cause the failure of software features and functionalities, but the application continues to run.

3. Severity 3 Bugs
Bugs in this category can cause the software system or application to generate unexpected results and behave abnormally. These bugs are responsible for inconsistency in the software system.

4. Severity 4 Bugs
Bugs in this category basically affect the design of a software system or application.

COMPONENTS OF SEVERITY
Severity has two main components namely the following:

1. Impact
- It is a measure of the disruption caused to users when they encounter the bug while working.
- It reflects the degree to which the bug interferes with the user performing a task.
- Impact itself is classified into various levels.

2. Visibility
- It is a measure of the probability of encountering the bug, or in other words, a measure of how close the bug lies to the common execution paths.
- It reflects the frequency of occurrence of the bug.

Severity is calculated as the product of impact and visibility, as in the sketch below. Severity gives a measure of the perceived quality and usefulness of the software product. Therefore it would not be wrong to say that severity provides an overall measure of the quality of the software system or application.
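A small Python sketch of the severity = impact x visibility idea; the scales and labels below are invented for illustration:

    # Invented 1-4 scales; real projects define their own levels.
    IMPACT = {"cosmetic": 1, "inconvenient": 2, "blocks_task": 3, "data_loss": 4}
    VISIBILITY = {"rare": 1, "occasional": 2, "frequent": 3, "every_use": 4}

    def severity(impact, visibility):
        # Severity is the product of the two components.
        return IMPACT[impact] * VISIBILITY[visibility]

    print(severity("data_loss", "every_use"))   # 16: data loss on every use
    print(severity("cosmetic", "rare"))         # 1: a rarely seen glitch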


Wednesday, January 4, 2012

What are different aspects of stress testing?

Stress testing can be defined as a form of testing that is carried out to determine the stability and stress-handling capacity of a software system or module. Stress testing is all about testing the software system or application beyond its normal operational capacities. It is the testing of a software system or application up to its breaking or fatal point, in order to observe the results.

Stress testing has a much broader meaning. What is basically understood by a stress test?
- It refers to a test that mainly focuses upon the availability, error handling, and robustness of a software system or application.
- In stress testing, the software system or application is subjected to heavy loads of tasks.
- It is not about verifying proper behavior under normal operational conditions or a normal user environment.

Typically, the goal of stress testing is to check whether the software system or application crashes or fails in the face of catastrophic problems such as the unavailability of sufficient computing resources, including disk space or memory. It is also done to determine whether the system crashes or fails under denial-of-service attacks or unusually high concurrency.

Stress testing, load testing, and volume testing all seem like similar kinds of testing.

A look at the following examples of stress testing will clear up the confusion:


- Stress testing a web server:
A web server can be subjected to stress testing using bots, scripts, and denial-of-service tools to determine its performance and behavior under peak data and task load, as in the sketch below.
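A hedged Python sketch of such a script-driven load: fire many concurrent requests at a server and tally the responses. The URL and counts are placeholders; never aim this at servers you do not control:

    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"   # placeholder target you control
    REQUESTS = 1000
    WORKERS = 50

    def hit(_):
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                return resp.status
        except OSError:
            return "error"   # timeouts, refused connections, HTTP errors

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(hit, range(REQUESTS)))

    # The tally of status codes shows how the server behaved under load.
    print({code: results.count(code) for code in set(results)})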

- Stress testing can be studied in contrast with load testing.
Load testing is basically carried out to exercise the entire testing environment and a huge database, and to determine the response time of the software system or application, whereas stress testing exclusively focuses on identifying transactions and pushing them to a level at which the execution of the transaction software breaks.

Another point is that during stress testing, if the transactions are duly stressed, chances are the database will not experience a huge data load. However, if the transactions are not stressed, the database may experience a heavy workload.

SOME IMPORTANT POINTS:
- Stress testing is another word for system stress testing.
- It can be defined as loading concurrent users over and beyond the level that the system can handle.
- This breaks the weakest link in the whole software system or application.
- While carrying out stress testing, the software engineers, developers, or testers need to test the whole software system or application under the expected stress as well as under accelerated stress.
- The goal here is to determine the working life of the software system.
- It also aims at determining the failure modes of the software system or application.
- For the hardware counterpart of a complete system, stress testing can be defined as subjecting the hardware to exaggerated levels of stress.
- This is basically done to determine the stability of the hardware when used in a normal environment rather than a testing environment.
- Before modifying CPU parameters during overclocking, overvolting, undervolting, or underclocking, it is necessary to verify whether the new CPU parameters, such as frequency and core voltage, can sustain heavy CPU loads.
- Stress testing of such parameters is usually carried out by executing a CPU-intensive program for a prolonged period and observing whether the system crashes or hangs; a minimal sketch follows.
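In the Python sketch below, every core is kept busy for a fixed period while you watch whether the machine stays stable; the duration and the arithmetic workload are arbitrary illustrative choices:

    import multiprocessing
    import time

    def burn(seconds):
        # Pointless floating-point math to keep one core fully loaded.
        end = time.time() + seconds
        x = 0.0001
        while time.time() < end:
            x = (x * x + 1.0) % 1e9
        return x

    if __name__ == "__main__":
        duration = 10 * 60   # ten minutes; real burn-in runs go far longer
        procs = [multiprocessing.Process(target=burn, args=(duration,))
                 for _ in range(multiprocessing.cpu_count())]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print("stress run finished; the machine did not hang or crash")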


Wednesday, December 14, 2011

What are different characteristics of recovery testing?

Recovery testing makes its meaning clear through its name alone. We all know what recovery means: to return to the normal state after some failure, illness, and so on. This qualitative aspect is also present in today's software systems and applications.

- The recovery of a software system or application is defined as its ability to recover from hardware failures, crashes, and similar problems that are quite frequent with computers.
- Before the release of any software, it needs to be tested for its recovery factor. This is done by recovery testing.
- Recovery testing can be defined as the testing of a software system or application to determine its ability to recover from fatal system crashes and hardware problems.

One should always keep in mind that recovery testing is not to be confused with reliability testing, which aims at discovering the points at which the software system or application tends to fail.

- In a typical recovery test, the system is forced to fail, crash, or hang in order to check how the recovery mechanism of the software system or application responds and how strong it is.
- The software system or application is forced to fail in a variety of ways.
- Every attempt is made to discover the failure factors of the software system or application.

Objectives of Recovery Testing
- Apart from the recovery factor, recovery testing also aims at determining the speed of recovery of the software system.
- It aims to check how fast the software system or application is able to recover from a failure or crash.
- It also aims to check how well the system recovers.
- It checks the quality of the recovered software system or application. There is a defined type and extent to which the software must recover.
- The type and extent are specified in the requirements and specifications section of the documentation.
- Recovery testing is all about testing the recovery ability of the software system or application, i.e., how well it recovers from catastrophic problems, hardware failures, system crashes, and so on.

The following examples will further clarify the concept of recovery testing:

1. Keep the browser in running mode with multiple sessions assigned to it. Then just restart your system. After the system has booted, check whether the browser is able to recover all of the sessions that were running before the restart. If the browser is able to recover them, it is said to have good recovery ability.

2. Suddenly restart your computer while an application is running. After booting, check whether the data the application was working on is still intact and valid. If the data is still valid, intact, and safe, the application has a great deal of recovery capability.

3. Set an application such as a file downloader to data-receiving or downloading mode. Then just unplug the connecting cable. After a few minutes, plug the cable back in, let the application resume its operation, and check whether the application is still able to receive the data from the point where it left off, as in the sketch below. If it is not able to resume receiving data, it is said to have a bad recovery factor.
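The resume behavior in the third example can be sketched in Python with an HTTP Range request: ask the server for the bytes after what has already been saved and append them. The URL and file name are placeholders, and the server is assumed to support range requests:

    import os
    import urllib.request

    URL = "https://example.com/big-file.bin"   # placeholder
    DEST = "big-file.bin"

    def resume_download():
        # Ask only for the bytes we do not already have.
        have = os.path.getsize(DEST) if os.path.exists(DEST) else 0
        req = urllib.request.Request(URL, headers={"Range": f"bytes={have}-"})
        with urllib.request.urlopen(req, timeout=10) as resp, \
                open(DEST, "ab") as out:
            while True:
                chunk = resp.read(64 * 1024)
                if not chunk:
                    break
                out.write(chunk)

    resume_download()   # call again after a dropped connection to resume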

Recovery testing tests the ability of application software to restart the operations that were running just before the loss of the application's integrity. The main objective of recovery testing is to ensure that applications continue to run even after a failure of the system.

Recovery testing ensures the following:
- Data is stored in a preserved location.
- Previous recovery records are maintained.
- A recovery tool has been developed and is available at all times.


Tuesday, December 6, 2011

What are different characteristics of destructive testing?

Nothing checks the robustness of a software system or application better than destructive testing.
- Destructive testing is basically meant to determine the robustness of a software system or application.
- In order to determine this robustness, the system is subjected to very harsh tests, and attempts are made to make the program crash or hang.
- The aim of destructive testing is to cause the failure of the program.
- Destructive testing follows a typical process. The process has the following aspects:

Waterfall development model or CMMI traditional model

- It has long been common practice for software testing to be performed on a software system by an independent group of software testers after the functionality of the system has been fully developed.
- Destructive testing is carried out before the software is delivered to the client or customer.
- In this kind of practice, the testing stage is also used as a project buffer to compensate for project delays.
- But this results in compromising the time available for testing.
- In a variant of the same model, software testing is started simultaneously with the start of the project and continues as a regular process until the project is finished.

Extreme development model or Agile model

- The extreme development model and the agile software development model follow the test driven software development model.
- Following this procedure, unit tests are written first by the software developers or engineers.
- These unit tests fail initially, and this is expected.
- As the code is written and subjected to these tests, more and more units of the software system pass those unit tests successfully.
- The test cases being used in the testing are updated regularly for the new faults and errors discovered.
- The test cases thus updated are also made more efficient with the addition of the regression tests developed during the whole process.
- Unit tests are carried out simultaneously with the development and progress of the source code.
- They eventually become an integral part of the build process.
- This model aims at achieving continuous deployment, i.e., software updates can be released to the public easily.

Although there are some variations between the different models of testing, the testing cycle is typical and essentially the same for every kind of model. The following steps are involved in the testing cycle (a small destructive-test sketch follows the list):

- Requirement analysis
- Test planning
- Test development
- Test execution
- Test reporting
- Test result analysis
- Defect retesting
- Regression testing
- Test closure
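Since destructive testing deliberately tries to make the program fail, a tiny fuzzing-style Python sketch captures its spirit: throw random garbage at a unit and record the inputs that crash it. The target parser below is a stand-in for whatever unit is actually under test:

    import random
    import string

    def target_parser(text):
        # Stand-in for the code under test; assumes "key=value" input.
        key, value = text.split("=")
        return {key: int(value)}

    def random_input(max_len=20):
        return "".join(random.choice(string.printable)
                       for _ in range(random.randint(0, max_len)))

    crashes = []
    for _ in range(1000):
        data = random_input()
        try:
            target_parser(data)
        except Exception as exc:   # a failure: record the input and error
            crashes.append((data, repr(exc)))

    print(f"{len(crashes)} failing inputs found; first few:")
    for data, err in crashes[:5]:
        print(repr(data), "->", err)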


Wednesday, September 1, 2010

What is Recovery Testing and what are its features?

Recovery testing tells how well an application is able to recover from a crash or hardware failure. Recovery testing should not be confused with reliability testing, which tries to discover the specific point at which failure occurs.
- Recovery is the ability to restart operations after the integrity of the application is lost.
- The time taken to recover depends upon the number of restart points, the volume of the application, the training and skill of the people conducting recovery activities, and the tools available for recovery.
- Recovery testing ensures that operations can be continued after a disaster.
- Recovery testing verifies the recovery process and its effectiveness.
- In recovery testing, adequate backup data is preserved and kept in a secure location.
- Recovery procedures are documented.
- Recovery personnel have been assigned and trained.
- Recovery tools have been developed and are available.

To use recovery testing, procedures, methods, tools, and techniques are assessed to evaluate their adequacy. Recovery testing can be done by introducing a failure into the system and checking whether the system is able to recover; a small checkpoint-style sketch follows. A simulated disaster is usually performed on one aspect of the application system. When there are many failures, recovery testing should be carried out for one segment and then for the next.
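In the Python sketch below, a job checkpoints its progress, a crash is injected partway through, and on restart the job must resume from the checkpoint rather than starting over; the file name and workload are illustrative:

    import json
    import os

    CHECKPOINT = "job.ckpt"

    def run_job(items, crash_at=None):
        start = 0
        if os.path.exists(CHECKPOINT):   # a previous run died: recover
            with open(CHECKPOINT) as f:
                start = json.load(f)["next"]
        for i in range(start, len(items)):
            if i == crash_at:
                raise RuntimeError("simulated crash")   # injected failure
            print("processed", items[i])
            with open(CHECKPOINT, "w") as f:
                json.dump({"next": i + 1}, f)
        os.remove(CHECKPOINT)   # clean finish: no recovery state needed

    items = list("abcdef")
    try:
        run_job(items, crash_at=3)   # fails partway through
    except RuntimeError:
        pass
    run_job(items)                   # must resume at item 3, not item 0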

Recovery testing is used when the continuity of the system is needed in order for the system to perform or function properly. The user estimates the losses and the time span for carrying out recovery testing. Recovery testing is done by system analysts, testing professionals, and management personnel.

