Errors are a major headache for software programmers, developers and testers alike. They cause a software system or application to falter, produce unexpected results and behave abnormally. Some errors cause more harm than others, some are easy to discover while others stay hidden, some are as active and disruptive as a volcano and others lie dormant. Error handling therefore becomes an important factor in deciding the success of a program.
WHAT IS MEANT BY ERROR HANDLING?
- Error handling is the way a program deals with the errors that disturb its functioning.
- The error handling procedure should be robust and well designed.
- Error handling requires a lot of decision making.
- Like other processes, the error handling process can itself contain defects.
STEPS IN ERROR HANDLING PROCESS & DEFECT CAUSING FACTORS
1. The main steps in an error handling process are the anticipation, detection and resolution of errors that occur during the execution of a software program or application.
2. Some applications even employ dedicated programs called “error handlers” developed specially for handling errors.
3. A software system or application is said to have good error handling capabilities if it can recover from errors without the whole program terminating or, when it cannot handle an error, terminates the program gracefully without any data loss.
4. A forceful termination that loses data is nothing but an error handling defect.
5. The basic factors causing run-time errors are invalid input data and adverse function parameters.
6. Lack of memory is another defect-causing factor.
A software application comprises various small programs, and these programs may conflict with each other at run time. Similarly, web applications experience errors due to electrical noise, malware or undue load on the server.
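The recover-or-terminate-gracefully behaviour described above can be sketched in a few lines of Python. The function and the inputs are hypothetical, purely for illustration:

```python
def parse_age(raw):
    """Return the age as an int, or None for invalid input (a recoverable error)."""
    try:
        age = int(raw)
    except ValueError:
        return None              # recover: signal invalid input instead of crashing
    return age if 0 <= age <= 150 else None

def collect_ages(inputs):
    ages = []
    for raw in inputs:
        age = parse_age(raw)
        if age is None:
            print(f"skipping invalid input: {raw!r}")   # recover and continue
        else:
            ages.append(age)
    return ages

print(collect_ages(["42", "abc", "-7", "30"]))   # → [42, 30]
```

The invalid entries are skipped with a message rather than terminating the whole run, and no previously collected data is lost.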
ERROR HANDLING PROCESS
A software system or application can overcome these errors through its error handling process. But the error handling process itself is at risk from defects in its own source code. We can therefore define error handling defects as defects that reduce the efficiency of the error handling process.
1. On the initiation of the error handling process, the discrepancy between the expected behavior and actual behavior is identified.
2. Whenever there is some discrepancy in the behavior of the program, a defect is created.
3. The test script that was being executed when the defect was encountered is noted.
4. This process is called defect creation.
5. After this, the discovered defect is verified, i.e., it is checked whether or not the defect is valid.
6. A severity level is assigned to the defect.
7. This severity level indicates the impact and visibility of the defect on the program.
8. The defect can cause the core functionality to go out of order or stop working.
9. It can affect the operational environment.
10. Such defects prevent the user from accessing the features and functionalities of the software system or application.
11. Incorrect navigation links are also a defect.
12. Defects are assigned priorities according to the severity level of the errors they can cause.
13. This process is called defect prioritization.
14. Several priority codes have been defined.
15. There are some defects that do not even allow the testing to take place.
16. Defects causing such errors are given the highest priority.
17. The defect is confirmed once again; this process is called defect confirmation.
18. After the defect confirmation the defect is analyzed, and the affected code is redesigned, redeveloped and retested for any shortcomings.
19. This process following the defect confirmation is called defect resolution.
20. After being resolved, the defect is reviewed once again by the developer, and certain test scripts are run to confirm that it has been fixed.
21. After this verification the defect is closed.
Friday, March 2, 2012
What are different error handling defects?
Posted by Sunflower at 3/02/2012 02:36:00 PM
Friday, September 3, 2010
Mutation Testing: how it is performed, benefits, operators and tools
Mutation Testing is a powerful method for finding errors in software programs. It involves deliberately altering a program’s code, then re-running a suite of valid unit tests against the mutated program. A good unit test will detect the change in the program and fail accordingly. Mutation testing is expensive to run, especially on very large applications, and is complicated and time-consuming to perform without an automated tool.
How is Mutation Testing performed?
- Create mutant versions of the software, each differing from the original software by one mutation.
- Each mutant therefore contains exactly one fault.
- Test cases are applied to the original software and to each mutant.
- Results are evaluated. If a mutant and the original software produce the same result, the applied test case has failed to detect the fault; if the test case detects the fault, it is effective and the mutant is said to be killed.
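The steps above can be sketched as a toy mutation-testing loop. The function under test, the mutants and the test cases are all illustrative assumptions, not the output of any real tool:

```python
def original(a, b):
    """Function under test: returns the larger of a and b."""
    return a if a > b else b

# Each mutant differs from the original by exactly one mutation.
mutants = {
    ">= substituted for >": lambda a, b: a if a >= b else b,
    "return branches swapped": lambda a, b: b if a > b else a,
}

# (inputs, expected output) pairs; expected values come from the specification.
test_cases = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def suite_passes(fn):
    """True if every test case passes against the given implementation."""
    return all(fn(*args) == expected for args, expected in test_cases)

assert suite_passes(original)   # the suite must pass on the original program

for name, mutant in mutants.items():
    print(name, "->", "survived" if suite_passes(mutant) else "killed")
```

Note that the first mutant survives no matter what tests are added: for a maximum function, `>=` and `>` behave identically when the arguments are equal. Such "equivalent mutants" are one reason mutation testing is hard to fully automate.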
Benefits of Mutation Testing
- It introduces a new level of error detection.
- It uncovers errors in code that were previously thought impossible to detect automatically.
- The customer receives more reliable, bug-free software.
On what factors does Mutation Testing depend?
- It depends heavily on the types of faults that the mutation operators are designed to represent.
- Mutation operators are small, well-defined changes to particular aspects of a program's code; even the slightest such change may cause the program to function incorrectly.
Mutation Operators and Tools
Some mutation operators for languages like Java and C++ are:
- Changing the access modifiers, like public to private etc.
- Static modifier change.
- Argument order change.
- Super keyword deletion.
Jester, Pester, Nester and Insure++ are some of the tools available for mutation testing.
Posted by Sunflower at 9/03/2010 07:20:00 PM
Saturday, March 13, 2010
Data Link Layer - Layer 2 of OSI model
The Data Link Layer is Layer 2 of the seven-layer OSI model of computer networking.
At this layer, data packets are encoded and decoded into bits. It furnishes transmission protocol knowledge and management, handles errors arising in the physical layer, and provides flow control and frame synchronization.
The data link layer performs various functions depending upon the hardware protocol used; its primary functions include:
- Communication with the Network layer above.
- Communication with the Physical layer below.
- Segmentation of upper layer datagrams (also called packets) into frames in sizes that can be handled by the communications hardware.
- The data link layer organizes the pattern of data bits into frames before transmission. The frame formatting issues such as stop and start bits, bit order, parity and other functions are handled here.
- It provides error checking by adding a CRC to the frame, and flow control.
- The data link layer is also responsible for logical link control, media access control, hardware addressing, error detection and handling and defining physical layer standards.
- The data link layer is divided into two sublayers: the media access control (MAC) layer and the logical link control (LLC) layer. The former controls how computers on the network gain access to the data and obtain permission to transmit it; the latter controls packet synchronization, flow control and error checking.
- The data link layer is where most LAN (local area network) and wireless LAN technologies are defined. Technologies and protocols used with this layer are Ethernet, Token Ring, FDDI, ATM, SLIP, PPP, HDLC, and ADCCP.
- The data link layer is often implemented in software as a driver for a network interface card (NIC). Because the data link and physical layers are so closely related, many types of hardware are also associated with the data link layer.
- Data link layer processing is faster than network layer processing because less analysis of the packet is required.
- The Data Link layer also manages physical addressing schemes such as MAC addresses for Ethernet networks, controlling access of any various network devices to the physical medium.
Posted by Sunflower at 3/13/2010 07:42:00 PM
Monday, December 14, 2009
Hamming Distance (HD)
Hamming distance (Hamming metric) In the theory of block codes intended for error detection or error correction, the Hamming distance d(u, v) between two words u and v, of the same length, is equal to the number of symbol places in which the words differ from one another. If u and v are of finite length n then their Hamming distance is finite, since d(u, v) ≤ n.
It can be called a distance since it is non-negative, nil-reflexive, symmetric, and triangular:
0 ≤ d(u, v)
d(u, v) = 0 iff u = v
d(u, v) = d(v, u)
d(u, w) ≤ d(u, v) + d(v, w)
The Hamming distance is important in the theory of error-correcting codes and error-detecting codes: if, in a block code, the codewords are at a minimum Hamming distance d from one another, then
(a) if d is even, the code can detect d – 1 symbols in error and correct ½d – 1 symbols in error;
(b) if d is odd, the code can detect d – 1 symbols in error and correct ½(d – 1) symbols in error.
How to Calculate Hamming Distance?
- Ensure the two strings are of equal length. The Hamming distance can only be calculated between two strings of equal length.
String 1: "1001 0010 1101"
String 2: "1010 0010 0010"
- Compare the first bit in each string. If they are the same, record a "0" for that bit; if they are different, record a "1". In this case, the first bit of both strings is "1", so record a "0" for the first bit.
- Compare each bit in succession and record either "1" or "0" as appropriate.
String 1: "1001 0010 1101"
String 2: "1010 0010 0010"
Record: "0011 0000 1111"
- Add all the digits of the record together (each "1" marks a position where the strings differ) to obtain the Hamming distance.
Hamming distance = 0+0+1+1+0+0+0+0+1+1+1+1 = 6
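The steps above reduce to a few lines of Python (a straightforward sketch, not tied to any particular library):

```python
def hamming_distance(s1: str, s2: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    if len(s1) != len(s2):
        raise ValueError("strings must be of equal length")
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))

# The worked example above, with the grouping spaces removed:
print(hamming_distance("100100101101", "101000100010"))   # → 6
```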
Posted by Sunflower at 12/14/2009 07:01:00 PM
Error Detection Methods Cont...
- Cyclic Redundancy Check (CRC) :
This error detection method computes the remainder of a polynomial division of a generator polynomial into the message. The remainder, which is usually 16 or 32 bits, is then appended to the message. When the remainder is recomputed at the receiving end, a non-zero value indicates an error. Depending on the generator polynomial's size the process can fail in several ways, and it is very difficult to determine how effective a given CRC will be at detecting errors. The probability that a random word is a valid code word (and thus not detectable as an error) is completely a function of the code rate: 2^-(n - k), where n is the number of bits formed from k original bits of data, and (n - k) is the number of redundant bits, r.
Use of the CRC technique for error correction normally requires the ability to send retransmission requests back to the data source.
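As a quick illustration, Python's standard library exposes CRC-32 via `zlib.crc32`. The sketch below uses the equivalent recompute-and-compare check rather than literally appending the remainder bits to the message:

```python
import zlib

message = b"hello, world"
crc = zlib.crc32(message)            # 32-bit remainder the sender would append

received_ok  = b"hello, world"
received_bad = b"hellp, world"       # one corrupted character

print(zlib.crc32(received_ok)  == crc)   # → True  (no error detected)
print(zlib.crc32(received_bad) == crc)   # → False (error detected)
```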
- Hamming distance based checks :
If we want to detect d bit errors in an n-bit word, we can map every n-bit word into a larger (n+d+1)-bit word so that the minimum Hamming distance between valid mappings is d+1. Then, if one receives an (n+d+1)-bit word that does not match any word in the mapping, it can successfully be detected as erroneous. Moreover, d or fewer errors can never transform one valid word into another, because the Hamming distance between valid words is at least d+1; such errors only lead to invalid words, which are detected correctly. Given a stream of m*n bits, we can detect x <= d bit errors in each word using the above method on every n-bit word; in fact, we can detect a maximum of m*d errors if every n-bit word is transmitted with at most d errors.
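A minimal concrete instance of this idea uses the simplest code with minimum distance d+1: a (d+1)-fold repetition of a single bit (the parameters below are chosen only for illustration):

```python
d = 2                                   # number of bit errors we want to detect
valid = {"0" * (d + 1), "1" * (d + 1)}  # the mapping: minimum Hamming distance d+1

def detected_as_error(word: str) -> bool:
    """A received word outside the mapping is flagged as erroneous."""
    return word not in valid

print(detected_as_error("000"))   # → False: a valid word
print(detected_as_error("010"))   # → True: one bit error, detected
print(detected_as_error("011"))   # → True: two bit errors, still detected
print(detected_as_error("111"))   # → False: three errors (> d) made another valid word
```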
Posted by Sunflower at 12/14/2009 04:33:00 PM
Error Detection and Correction
Error detection and correction are techniques to ensure that data is transmitted without errors, even across unreliable media or networks. Error detection is the ability to detect the presence of errors caused by noise or other impairments during transmission from the transmitter to the receiver. Error correction is the additional ability to reconstruct the original, error-free data.
Because of the extremely low bit-error rates in data transmissions, most error detection methods and algorithms are designed to address the detection or correction of a single bit error.
Error Detection Methods:
Errors introduced into valid data by communications faults, noise or other failures, especially in compressed data where redundancy has been removed as much as possible, can be detected and/or corrected by introducing redundancy into the data stream.
- Parity Schemes
A parity bit is an error detection mechanism that can only detect an odd number of errors. The stream of data is broken up into blocks of bits, and the number of 1 bits is counted. Then, a "parity bit" is set (or cleared) if the number of one bits is odd (or even). (This scheme is called even parity; odd parity can also be used.) If the tested blocks overlap, then the parity bits can be used to isolate the error, and even correct it if the error affects a single bit.
There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of bit errors (one, three, five, and so on). If an even number of bits (two, four, six and so on) are flipped, the parity bit appears to be correct, even though the data is corrupt.
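A small sketch of even parity and of the even-number-of-flips blind spot just described (the bit strings are arbitrary examples):

```python
def even_parity_bit(bits: str) -> int:
    """Parity bit chosen so the total number of 1s (data + parity) is even."""
    return bits.count("1") % 2

block  = "1011001"                 # four 1s, so the parity bit is 0
parity = even_parity_bit(block)
print(parity)                                          # → 0

one_flip  = "1011000"              # one bit flipped: parity check fails
two_flips = "1011010"              # two bits flipped: parity still matches
print(even_parity_bit(one_flip)  == parity)            # → False (detected)
print(even_parity_bit(two_flips) == parity)            # → True  (undetected)
```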
- Checksum
A checksum of a message is an arithmetic sum of message code words of a certain word length, for example byte values, and their carry value. The sum is negated by means of ones-complement, and stored or transferred as an extra code word extending the message. On the receiver side, a new checksum may be calculated from the extended message. If the new checksum is not 0, an error has been detected.
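The ones'-complement scheme just described can be sketched as follows. The code uses 8-bit words for brevity; real protocols such as the Internet checksum use 16-bit words:

```python
MASK = 0xFF   # 8-bit code words

def ones_complement_sum(data: bytes) -> int:
    """Add bytes with end-around carry (ones'-complement addition)."""
    total = 0
    for b in data:
        total += b
        total = (total & MASK) + (total >> 8)   # fold the carry back in
    return total

def make_checksum(message: bytes) -> int:
    """Negate the sum by means of ones' complement, as described above."""
    return MASK ^ ones_complement_sum(message)

def is_valid(message: bytes, checksum: int) -> bool:
    """Receiver check: the new checksum over the extended message must be 0."""
    return MASK ^ ones_complement_sum(message + bytes([checksum])) == 0

msg = b"data"
cs = make_checksum(msg)
print(is_valid(msg, cs))           # → True
print(is_valid(b"dbta", cs))       # → False (single-byte corruption detected)
```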
Posted by Sunflower at 12/14/2009 03:47:00 PM