
Friday, December 2, 2011

What are the different characteristics of regression testing?

The word regression means a relapse to a less developed or less perfect state. From this we can deduce that regression testing is a kind of testing that discovers and uncovers hidden and newly introduced errors and flaws after modifications have been made to the functions, operations, patches, etc. of a software system.

- Regression testing is also carried out after the software system has been reconfigured or had its known errors fixed.
- Regression testing may seem like an attempt to break the software system, but it serves a worthwhile objective: ensuring that a new modification or configuration change has not introduced other new bugs and errors.

- It is very difficult for a software developer to figure out how a particular change they are about to introduce will affect all the other parts of the software system. Regression testing is therefore necessary to maintain the quality and standard of the program.

- Sometimes fixing one error causes another error elsewhere in the software system, and that new error will remain unnoticed and uncorrected unless regression testing is carried out.

- Only regression testing can trace and locate such hidden errors and bugs.

There are several methods for conducting regression testing. One of them is to re-run previously passed tests and observe whether the behavior, response, and output of the software system have changed.

Regression testing need not be as laborious as other testing methodologies:

- The effort required for regression testing can be reduced to a minimum by selecting an appropriate combination of a small number of specified test cases.
- The tests should be selected so as to give maximum coverage of the modification or change being tested.
- It has been observed that fixing a software program can lead to the re-emergence of old errors and bugs, and sometimes new errors are created as well.
- This happens because, over time, poor change-handling practices can cause an error fix to be lost from the software system.

We can conclude that any fix is somewhat fragile, in the sense that it may repair a bug only temporarily: the bug can resurface after repeated changes to the program. A bug fix in one part of the program often causes new bugs in other parts, so the software system sometimes needs to be redesigned.

The testers require good coding skills and practices.
- Regression testing can be carried out through manual testing procedures as well as programmatic approaches.
- Regression testing is usually done with automated tools. Such an environment allows the test cases to be run automatically and any errors and bugs to be reported (see the sketch after this list).
- Extreme Programming (XP) considers regression testing one of its crucial integral parts.
- During each stage of the software development cycle, design documentation is replaced by automated testing of the whole software system.
- Regression testing measures the correctness of a software system, maintains its quality, and keeps its output up to standard.
- A software development process should always include a regression testing stage.
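
As an illustration of such an automated suite, here is a minimal sketch using Python's built-in unittest module; apply_discount and its expected values are hypothetical stand-ins for real application code. The suite is re-run after every change, and any failure signals a regression.

import unittest

def apply_discount(price, percent):
    # hypothetical production code under test
    return round(price * (1 - percent / 100.0), 2)

class RegressionTests(unittest.TestCase):
    def test_known_good_case(self):
        # pins down behavior that worked before the latest change
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_previous_bug_stays_fixed(self):
        # reproduces an old (hypothetical) defect report:
        # a 0% discount must not alter the price
        self.assertEqual(apply_discount(59.99, 0), 59.99)

if __name__ == "__main__":
    unittest.main()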


Monday, August 16, 2010

Overview of Error Handling Testing and Its Use

Error handling refers to the anticipation, detection, and resolution of programming, application, and communications errors. Development errors, which occur in the form of syntax and logic errors, can be prevented. A run-time error takes place during the execution of a program, and usually happens because of adverse system parameters or invalid input data.

The objectives of error handling testing are:
- To verify that the application system recognizes all expected error conditions.
- To verify that accountability for processing errors has been assigned and that procedures provide a high probability that errors will be properly corrected.
- To verify that reasonable control is maintained over errors during the correction process.

How to use


- A group of knowledgeable people is required to anticipate what can go wrong in the application system.
- These application-knowledgeable people assemble to integrate their knowledge of the user area, auditing, and error tracking.
- Logical test error conditions should be created based on this assimilated information.

Error handling testing can be used throughout the SDLC. The impact that errors produce should be judged, and steps should be taken to reduce them to an acceptable level. Error handling testing assists the error-management process; a small sketch of such a test follows.
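
As an illustration, here is a minimal sketch using Python's unittest; parse_age and its error message are hypothetical. The test creates a logical error condition (non-numeric input) and verifies that the application recognizes it rather than failing silently.

import unittest

def parse_age(text):
    # hypothetical application code with explicit error handling
    if not text.isdigit():
        raise ValueError("age must be a non-negative integer")
    return int(text)

class ErrorHandlingTests(unittest.TestCase):
    def test_invalid_input_is_recognized(self):
        # an anticipated error condition must be detected and reported
        with self.assertRaises(ValueError):
            parse_age("twelve")

    def test_valid_input_still_works(self):
        self.assertEqual(parse_age("12"), 12)

if __name__ == "__main__":
    unittest.main()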


Saturday, March 13, 2010

Data Link Layer - Layer 2 of OSI model

The Data Link Layer is Layer 2 of the seven-layer OSI model of computer networking.
At this layer, data packets are encoded and decoded into bits. The layer furnishes transmission protocol knowledge and management, and handles errors in the physical layer, flow control, and frame synchronization.

The data link layer performs various functions depending upon the hardware protocol used; its primary functions and characteristics include:

- Communication with the Network layer above.
- Communication with the Physical layer below.
- Segmentation of upper layer datagrams (also called packets) into frames in sizes that can be handled by the communications hardware.
- The data link layer organizes the pattern of data bits into frames before transmission. Frame formatting issues such as start and stop bits, bit order, and parity are handled here (see the framing sketch after this list).
- It provides error checking by adding a CRC to the frame, and flow control.
- The data link layer is also responsible for logical link control, media access control, hardware addressing, error detection and handling and defining physical layer standards.
- The data link layer is divided into two sublayers: the media access control (MAC) layer and the logical link control (LLC) layer. The former controls how computers on the network gain access to the data and obtain permission to transmit it; the latter controls packet synchronization, flow control and error checking.
- The data link layer is where most LAN (local area network) and wireless LAN technologies are defined. Technologies and protocols used with this layer are Ethernet, Token Ring, FDDI, ATM, SLIP, PPP, HDLC, and ADCCP.
- The data link layer is often implemented in software as a driver for a network interface card (NIC). Because the data link and physical layers are so closely related, many types of hardware are also associated with the data link layer.
- Data link layer processing is faster than network layer processing because less analysis of the packet is required.
- The data link layer also manages physical addressing schemes such as MAC addresses for Ethernet networks, controlling the access of various network devices to the shared physical medium.
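
As a small illustration of framing at this layer, the sketch below parses the 14-byte Ethernet II header (destination MAC, source MAC, EtherType) from raw bytes; the sample frame is made up for the example.

import struct

def parse_ethernet_header(frame):
    # Ethernet II header: 6-byte destination MAC, 6-byte source MAC,
    # 2-byte EtherType, all in network byte order
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join("%02x" % b for b in mac)
    return fmt(dst), fmt(src), hex(ethertype)

frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..."
print(parse_ethernet_header(frame))
# ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', '0x800')  -- 0x0800 = IPv4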


Monday, December 14, 2009

Hamming Distance (HD)

Hamming distance (Hamming metric): In the theory of block codes intended for error detection or error correction, the Hamming distance d(u, v) between two words u and v of the same length is equal to the number of symbol places in which the words differ from one another. If u and v are of finite length n, then their Hamming distance is finite, since d(u, v) ≤ n.
It can be called a distance since it is non-negative, nil-reflexive, symmetric, and triangular:
0 ≤ d(u, v)
d(u, v) = 0 iff u = v
d(u, v) = d(v, u)
d(u, w) ≤ d(u, v) + d(v, w)
The Hamming distance is important in the theory of error-correcting codes and error-detecting codes: if, in a block code, the codewords are at a minimum Hamming distance d from one another, then
(a) if d is even, the code can detect d − 1 symbols in error and correct d/2 − 1 symbols in error;
(b) if d is odd, the code can detect d − 1 symbols in error and correct (d − 1)/2 symbols in error.
For example, a code with minimum distance d = 3 can detect any 2 symbol errors and correct any single symbol error.

How to Calculate Hamming Distance?
- Ensure the two strings are of equal length. The Hamming distance can only be calculated between two strings of equal length.
String 1: "1001 0010 1101"
String 2: "1010 0010 0010"
- Compare the first bit in each string. If the bits are the same, record a "0" for that position; if they are different, record a "1". In this case, the first bit of both strings is "1", so record a "0".
- Compare each bit in succession and record either "1" or "0" as appropriate.
String 1: "1001 0010 1101"
String 2: "1010 0010 0010"
Record: "0011 0000 1111"
- Add all the ones and zeros in the record together to obtain the Hamming distance.
Hamming distance = 0+0+1+1+0+0+0+0+1+1+1+1 = 6
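
The procedure above is only a few lines of Python; this sketch ignores the spaces used for readability and assumes equal-length binary strings.

def hamming_distance(s1, s2):
    # drop the grouping spaces, then count positions where the bits differ
    s1, s2 = s1.replace(" ", ""), s2.replace(" ", "")
    if len(s1) != len(s2):
        raise ValueError("Hamming distance requires strings of equal length")
    return sum(b1 != b2 for b1, b2 in zip(s1, s2))

print(hamming_distance("1001 0010 1101", "1010 0010 0010"))  # 6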


Error Detection Methods Cont...

- Cyclic Redundancy Check (CRC):
This error detection method computes the remainder of a polynomial division of the message by a generator polynomial. The remainder, which is usually 16 or 32 bits, is then appended to the message. When the receiver recomputes the remainder, a non-zero value indicates an error. Depending on the generator polynomial, the process can fail in several ways, and it is very difficult to determine how effective a given CRC will be at detecting errors. The probability that a random code word is valid (not detectable as an error) is entirely a function of the code rate: 1/2^(n − k), where n is the number of bits formed from k original bits of data and (n − k) is the number of redundant bits, r.
Use of the CRC technique for error correction normally requires the ability to send retransmission requests back to the data source.
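
The following is a minimal sketch of CRC-based error detection using Python's built-in zlib.crc32 (a 32-bit CRC); the message bytes are arbitrary, and a real protocol would define its own generator polynomial and framing.

import struct
import zlib

def send(message):
    # sender: compute the 32-bit CRC and append it to the message
    crc = zlib.crc32(message)
    return message + struct.pack("!I", crc)

def check(packet):
    # receiver: recompute the CRC over the message and compare
    message, crc = packet[:-4], struct.unpack("!I", packet[-4:])[0]
    return zlib.crc32(message) == crc

packet = send(b"hello world")
print(check(packet))                            # True: no error
corrupted = bytes([packet[0] ^ 0x01]) + packet[1:]
print(check(corrupted))                         # False: single-bit error detected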

- Hamming distance based checks:
If we want to detect d bit errors in an n-bit word, we can map every n-bit word into a bigger (n + d + 1)-bit word so that the minimum Hamming distance between valid mappings is d + 1. This way, if one receives an (n + d + 1)-bit word that does not match any word in the mapping, it can successfully be detected as erroneous. Moreover, d or fewer errors will never transform one valid word into another, because the Hamming distance between valid words is at least d + 1; such errors lead only to invalid words, which are detected correctly. Given a stream of m*n bits, we can detect up to d bit errors per word by using the above method on every n-bit word; in fact, we can detect a maximum of m*d errors if every n-bit word is transmitted with at most d errors.
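
As a concrete illustration, the sketch below uses a 3x repetition code, whose valid codewords ("000" and "111") are at Hamming distance 3: up to 2 bit errors per codeword are detectable, and 1 is correctable by majority vote. It is a toy code chosen only to make the distance argument visible.

def encode(bits):
    # each data bit maps to a 3-bit codeword: "0" -> "000", "1" -> "111"
    return "".join(b * 3 for b in bits)

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        block = coded[i:i + 3]
        if block not in ("000", "111"):
            print("error detected in block", block)
        # majority vote corrects a single flipped bit
        out.append("1" if block.count("1") >= 2 else "0")
    return "".join(out)

sent = encode("10")          # "111000"
received = "110000"          # one bit flipped in the first block
print(decode(received))      # "10" -- the single error was corrected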


Error Detection and Correction

Error detection and correction are techniques to ensure that data is transmitted without errors, even across unreliable media or networks. Error detection is the ability to detect the presence of errors caused by noise or other impairments during transmission from the transmitter to the receiver. Error correction is the additional ability to reconstruct the original, error-free data.
Because of the extremely low bit-error rates in data transmissions, most error detection methods and algorithms are designed to address the detection or correction of a single bit error.

Error Detection Methods:
Errors introduced into valid data by communications faults, noise, or other failures, especially into compressed data where redundancy has been removed as much as possible, can be detected and/or corrected by introducing redundancy into the data stream.

- Parity Schemes
A parity bit is an error detection mechanism that can only detect an odd number of errors. The stream of data is broken up into blocks of bits, and the number of 1 bits is counted. Then, a "parity bit" is set (or cleared) if the number of one bits is odd (or even). (This scheme is called even parity; odd parity can also be used.) If the tested blocks overlap, then the parity bits can be used to isolate the error, and even correct it if the error affects a single bit.
There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of bit errors (one, three, five, and so on). If an even number of bits (two, four, six and so on) are flipped, the parity bit appears to be correct, even though the data is corrupt.
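
A minimal sketch of even parity and its limitation, in Python (the bit strings are arbitrary examples):

def add_even_parity(block):
    # the parity bit makes the total count of 1 bits even
    parity = "1" if block.count("1") % 2 else "0"
    return block + parity

def parity_ok(block_with_parity):
    return block_with_parity.count("1") % 2 == 0

sent = add_even_parity("1011001")   # four 1s -> parity "0" -> "10110010"
print(parity_ok(sent))              # True: no error
print(parity_ok("00110010"))        # False: one flipped bit is detected
print(parity_ok("00010010"))        # True: two flipped bits slip through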

- Checksum
A checksum of a message is an arithmetic sum of message code words of a certain word length, for example, byte values, together with their carry value. The sum is negated by means of ones-complement and stored or transferred as an extra code word extending the message. On the receiver side, a new checksum may be calculated from the extended message. If the new checksum is not 0, an error has been detected.
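
As an illustration, here is a minimal sketch of a ones-complement checksum over 16-bit words, in the style of the Internet checksum (RFC 1071); the sample words are arbitrary.

def ones_complement_sum(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def checksum(words):
    # negate the sum by ones-complement, keeping 16 bits
    return ~ones_complement_sum(words) & 0xFFFF

message = [0x4500, 0x0073, 0x0000, 0x4000]
ck = checksum(message)
print(hex(ck))                          # 0x7a8c
print(checksum(message + [ck]) == 0)    # True: receiver detects no error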


Friday, July 31, 2009

Quick Tech Lesson: Overview Of The Data Link Layer

The task of the data link layer is to convert the raw bit stream offered by the physical layer into a stream of frames for use by the network layer. Various framing methods are used, including character count, character stuffing, and bit stuffing. Data link protocols can provide error control to retransmit damaged or lost frames. To prevent a fast sender from overrunning a slower receiver, the data link protocol can also provide flow control. The sliding window mechanism is widely used to integrate error control and flow control in a convenient way.
Sliding window protocols can be categorized by the size of the sender's window and the size of the receiver's window. When both are equal to 1, the protocol is stop-and-wait. When the sender's window is greater than 1, for example to prevent the sender from blocking on a circuit with a long propagation delay, the receiver can be programmed either to discard all frames other than the next one in sequence (protocol 5) or to buffer out-of-order frames until they are needed (protocol 6). A simplified sketch of the go-back-n variant follows.
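
Here is a simplified, round-based sketch of go-back-n sender-window bookkeeping (the receiver discards out-of-order frames, as in protocol 5). Timers and ACK loss are ignored, and the loss pattern is supplied as a set of frame indices that are dropped on their first transmission only.

def go_back_n(num_frames, window, lost):
    lost = set(lost)
    base = 0            # oldest unacknowledged frame
    transmissions = 0
    while base < num_frames:
        next_expected = base
        gap = False
        for seq in range(base, min(base + window, num_frames)):
            transmissions += 1
            if seq in lost:
                lost.discard(seq)   # assume the retransmission succeeds
                gap = True          # receiver discards everything after the gap
            elif not gap:
                next_expected += 1  # in-order frame: cumulative ACK advances
        base = next_expected        # slide the window past the ACKed frames
    return transmissions

print(go_back_n(num_frames=10, window=4, lost={3}))
# 11: frame 3 is sent twice, and the frames behind it are delayed a round
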
Protocols can be modeled using various techniques to help demonstrate their correctness. Finite state machine models and Petri net models are commonly used for this purpose.
Many networks use one of the bit-oriented protocols (SDLC, HDLC, ADCCP, or LAPB) at the data link level. All of these protocols use flag bytes to delimit frames, and bit stuffing to prevent flag patterns from occurring in the data. All of them also use a sliding window for flow control. The Internet uses SLIP and PPP as data link protocols. ATM systems have their own simple protocol, which does a bare minimum of error checking and no flow control. A sketch of bit stuffing follows.
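
A minimal sketch of bit stuffing as used by HDLC-style protocols: after five consecutive 1 bits, the sender inserts a 0, so the flag pattern 01111110 can never appear inside the data.

def bit_stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit breaks up any run of five 1s
            run = 0
    return "".join(out)

def bit_unstuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0 that must follow
            run = 0
        i += 1
    return "".join(out)

print(bit_stuff("01111110"))                            # "011111010"
print(bit_unstuff(bit_stuff("01111110")) == "01111110") # True: round trip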

