


Friday, April 19, 2013

What is Paging? Why is it used?


- Paging is an important memory management concept in computer operating systems.
- It is essentially a memory management scheme used for storing and retrieving data from secondary storage devices.
- Under this scheme, data is retrieved from secondary storage and handed over to the operating system.
- The data is moved in blocks that all have the same size.
- These blocks are called pages.
- With paging, the physical address space of a process can be non-contiguous.
- Paging is a key concept for implementing virtual memory in operating systems designed for contemporary and general use.
- It allows disk storage to be used for data that does not fit into RAM.
- The main work of the paging technique is carried out when a program attempts to access a page that has no mapping to physical RAM.
- This situation is commonly known as a page fault.
- In this situation, the operating system takes control and handles the fault.
- This is done in a way that is invisible to the application.

The operating system carries out the following tasks in paging:
  1. Locates the data address in auxiliary (secondary) storage.
  2. Obtains a vacant page frame in physical memory to be used for storing the data.
  3. Loads the data requested by the application into the page frame obtained in the previous step.
  4. Updates the page table to reflect the newly loaded data.
  5. Returns execution control to the program, which transparently retries the instruction that caused the fault.
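
As a rough, minimal sketch of these steps (in Python, with hypothetical names such as backing_store and page_table; this is not any real operating system's API), a demand-paging fault handler might look like this:

# Minimal sketch of the page-fault handling steps above (hypothetical names,
# not a real OS interface). Eviction when no frame is free is left to a
# page replacement policy, discussed later in this post.
class PageFaultHandler:
    def __init__(self, backing_store, num_frames):
        self.backing_store = backing_store          # page number -> data block on disk
        self.free_frames = list(range(num_frames))  # vacant physical frames
        self.physical_memory = {}                   # frame -> data block
        self.page_table = {}                        # page number -> frame

    def handle_fault(self, page_number):
        # 1. Locate the data in auxiliary (secondary) storage.
        data = self.backing_store[page_number]
        # 2. Obtain a vacant page frame in physical memory.
        if not self.free_frames:
            raise MemoryError("no free frame: a replacement policy must evict one")
        frame = self.free_frames.pop()
        # 3. Load the requested data into that frame.
        self.physical_memory[frame] = data
        # 4. Update the page table to map the page to its new frame.
        self.page_table[page_number] = frame
        # 5. Return control; the faulting instruction is then retried transparently.
        return frame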

- If there is not enough room in RAM for all the requested data, another page may have to be removed from RAM to make space.
- If all of the page frames are in use, a frame holding data that is due to be evicted is selected and emptied.
- A page frame is said to be dirty if its contents have been modified since they were last read into RAM.
- In such a case it has to be written back to its original location on the drive before the frame is freed.
- If the contents have not been modified, the frame can simply be freed; a later access will cause a fault that reads the contents from the drive into an empty frame again.
- Paging systems must be efficient at determining which frames are to be emptied.
- Many page replacement algorithms have been designed to accomplish this task.
- Some of the most widely used replacement algorithms are listed here (a small LRU sketch follows the list):
  1. LRU, or least recently used
  2. FIFO, or first in first out
  3. LFU, or least frequently used.
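
As an illustration of how one of these policies decides which frame to empty, here is a small LRU simulation in Python. It is only a sketch: real kernels typically use approximations of LRU such as the clock (second-chance) algorithm rather than exact LRU.

from collections import OrderedDict

def simulate_lru(reference_string, num_frames):
    """Count page faults for an access sequence under exact LRU replacement."""
    frames = OrderedDict()   # page -> None, ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)          # hit: mark as most recently used
        else:
            faults += 1                       # miss: page fault
            if len(frames) >= num_frames:
                frames.popitem(last=False)    # evict the least recently used page
            frames[page] = None
    return faults

# Example: 3 frames, a short reference string
print(simulate_lru([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], num_frames=3))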

- To further increase responsiveness, paging systems may employ various strategies to predict which pages will be needed soon. 
- Such systems will attempt to load pages into main memory preemptively, before a program references them. 
- When demand paging is used, a page is brought in only when it is actually requested, not before.
- In a demand pager, execution of a program begins with none of its pages loaded into RAM.


Thursday, March 21, 2013

What are the principles of autonomic networking?


The complexity, dynamism and heterogeneity of networks are ever on the rise. All these factors are making our network infrastructure insecure, brittle and unmanageable. Today's world is so dependent on networking that its security and management cannot be put at risk. The networking response to this problem is called 'autonomic networking'.
The goal of building such systems is to realize networks that are capable of managing themselves according to high-level guidance provided by humans. Meeting this goal, however, calls for a number of scientific advances and newer technologies.

Principles of Autonomic Networking

A number of principles, paradigms and application designs need to be considered.

Compartmentalization: a structure with extensive flexibility, which the makers of autonomic systems prefer over a layering approach. This is the first target of autonomic networking.

Function re-composition: an architectural design has been envisioned that would provide highly dynamic, autonomic and flexible formation of large-scale networks. In such an architecture, functionality is composed in an autonomic fashion.

Atomization: functionality is broken down into smaller atomic units. These atomic units allow maximum freedom of re-composition.

Closed control loop: this is one of the fundamental concepts of control theory and is now also counted among the fundamental principles of autonomic networking. The loop controls and maintains the properties of the controlled system within desired bounds by constantly monitoring the target parameters (a minimal sketch of such a loop is given below).
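
To make the idea concrete, here is a minimal sketch of such a loop in Python. The sensor, the bounds and the corrective actions are all hypothetical placeholders, not part of any real autonomic framework:

import random
import time

def read_cpu_utilization():
    """Hypothetical sensor channel: report a monitored parameter (0-100 %)."""
    return random.uniform(0, 100)

def control_loop(lower=20.0, upper=80.0, period=1.0, iterations=5):
    """Closed control loop: monitor a parameter and act to keep it within bounds."""
    for _ in range(iterations):
        value = read_cpu_utilization()        # monitor the target parameter
        if value > upper:
            print(f"{value:.1f}% above bound -> shed load / add capacity")    # corrective action
        elif value < lower:
            print(f"{value:.1f}% below bound -> consolidate / scale down")    # corrective action
        else:
            print(f"{value:.1f}% within bounds -> no action")
        time.sleep(period)                    # wait, then repeat the loop

control_loop()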

The autonomic computing paradigm is inspired by the human autonomic nervous system. An autonomic system must therefore have a mechanism by which it can change its behavior in response to changes in essential variables in the environment and bring itself back into a state of equilibrium.
Survivability can be viewed in the terms of following in case of autonomic networking:
  1. Ability to protect itself
  2. Ability to recover from the faults
  3. Ability to reconfigure itself as per the environment changes.
  4. Ability to carry out its operation at an optimal level.
The following two factors affect the equilibrium state of an autonomic network:
  1. The internal environment: This includes factors such as CPU utilization, excess memory consumption and so on.
  2. The external environment: This includes factors such as safety against external attacks etc.
There are 2 major requirements of an autonomic system:
  1. Sensor channels: These sensors are required for sensing the changes.
  2. Motor channels: These channels would help the system in reacting and overcoming the effects of the changes.
The changes sensed by the sensor channels are analyzed to determine whether the variables are within their viability limits. If a variable is found to be outside its limit, the system plans what changes it should introduce to bring it back within the limit, thus returning the system to its equilibrium state.


Friday, July 6, 2012

Describe the concept of phase containment.


In this article we focus on an important concept, namely phase containment.

Process of Phase Containment


- The process of phase containment deals with the removal of defects and bugs from a software system or application while it is still in its SDLC, or software development life cycle.
- Phase containment favors the early removal of bugs and defects.
- It is so named because the process is all about containing faults within one specific phase of the software development life cycle before they have time to escape and affect development in the subsequent phases.

"There are two types of error. One type of the errors are the one which were introduced in the preceding phase of software development and now have accumulated in the current phase and the second types of error are the one which have been introduced in the current phase of software development itself. But the former kinds of errors are called defects and not probably errors". 

- The concept of phase containment is promoted by relating it to the organization's profitability and cost.
- In order to relate the concept to cost and profitability, the errors and defects that escaped from previous phases of the software development life cycle and surfaced in later phases must first be identified.
- The average cost of the defects and errors caught in the later phases of development must also be determined.
- Research has shown that it becomes far more difficult to sort out errors and faults once the software product is out in the market.

Methodologies to gain control of software product


- Many technologies and methodologies have been developed to gain control over the quality of the software product.
- They include:
  1. Static analysis: analyzing the program code, without executing it, with the purpose of finding the errors present in the software system and in specific coding constructs.
  2. Unit testing: the developer leveraging his or her knowledge of the code to try to break it.
  3. Code reviews: taking steps to ensure the security of the software system or application and better accountability.
  4. Code complete criteria: providing a consistent hand-off to the development team.

Metrics used in Phase Containment Process


- The phase containment process makes use of phase containment metrics.
- These metrics serve to check whether the developers and the process itself are on track, i.e. whether the process is working as desired for the company or organization.
- Three types of metrics are commonly used in the phase containment process:
  1. Trailing metric: used to find out the downstream impact of the phase containment process.
  2. Adoption metric: intended to check whether or not the software developers are adhering to the standards of the phase containment process.
  3. Effectiveness metric: used to check how well the phase containment process is working and how the developers are maintaining it (a small example calculation is sketched below).
Phase containment is used to make sure that all aspects of quality assurance are incorporated into every phase of the software development life cycle.
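
As an illustration of how the effectiveness metric might be computed, here is a small sketch assuming the commonly used definition of phase containment effectiveness (defects caught in the phase where they were introduced, divided by all defects introduced in that phase); the defect counts below are invented purely for the example:

def phase_containment_effectiveness(found_in_phase, escaped_from_phase):
    """PCE = defects caught in the phase they were introduced /
             all defects introduced in that phase (caught + escaped)."""
    total_introduced = found_in_phase + escaped_from_phase
    return found_in_phase / total_introduced if total_introduced else 1.0

# Hypothetical counts per SDLC phase: (caught in phase, escaped to later phases)
phases = {"Requirements": (8, 2), "Design": (15, 5), "Coding": (40, 10)}
for phase, (caught, escaped) in phases.items():
    print(f"{phase}: PCE = {phase_containment_effectiveness(caught, escaped):.0%}")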


Wednesday, July 4, 2012

What are common problems of test automation?


With the number of complex and advanced applications increasing, the need for automation testing has become strategic and critical. In the past, applications were quite simple. Even if the level of testing was sufficient then, how can we expect to test today's explosively growing software systems and applications with that same level of software testing?

At this point we come across two possibilities: the first is to increase the testing staff, and the second is to increase the level of test automation. Following the first possibility is problematic because skilled testers are often in short supply, so we are left with the second possibility.

Testing has advanced over the years, and today we have what is called the test automation process, by virtue of which many manual testing processes have been automated and precious human time has been saved. But this process, too, is not free of faults and problems. In this article we focus on the common problems of test automation.

Common Problems of Test Automation


- If you look at past cases, you will see that test automation did not reach its current level of success in one go; rather, it faced many failures in the beginning.
- Many test automation efforts were started and eventually died because of a lack of efficiency.

The First Problem:
- The vendors who develop and sell test automation tools do not provide demonstrations of their tools working on your own software, so the simplicity and efficiency of the tools cannot be rightly judged.
- What we see are sample software systems and applications that the vendor supplies, on which the testing tools work quite efficiently, and we assume the tools will work the same way for our applications as well.
- Consequently, even after many projects we do not achieve the same level of success.
- But we cannot blame the testing tools for the whole thing.
- One possibility is that the elements present in our software system or application are simply not compatible with the testing tools we are employing to test them.
- The only way to escape such pitfalls is to develop creative solutions that make our software systems and applications compatible with the testing tools.
- This is perhaps the only way to make the testing tools work with our software systems and applications.
- Many commercial testing tools are marketed as complete solutions for testing software products instead of being sold as solid aids within a wider test automation framework.
- One thing is common among all these software testing tools: each contains one or another scripting language, which soon allows us to discover each tool's failings.

The Second Problem:
- Another problem with automation testing is that the testers carrying out the automation often have little development experience, nor have they been given ample training to exploit these testing tools and programming environments.
- Test automation requires both programming skills and testing skills, but what we often have are only testers, i.e. testers who are not programmers.
- Therefore, for test automation to be effective, the people doing it should be skilled in both testing and programming.
- Due to a lack of experience, they often make a simple solution far too complex to be maintained and implemented.


Tuesday, July 3, 2012

What are the advantages and disadvantages of random testing?



Ever heard about random testing? It would not be so shocking if your answer is no, since this software testing methodology is rarely used. We have dedicated this article to a discussion of random testing, its advantages as well as its disadvantages.

Random testing is actually a kind of functional testing. It is used when the time taken for writing and running tests is quite long, or the problem is so complex that it is not possible for every test case to be executed.

Advantages and Disadvantages of Random Testing


- One advantage of random testing is that you can rely on the assertions in the program code.
- Another advantage is that, if you have a selection of randomly generated tests, you can make inferences about the reliability of the application in production.
- One of the big disadvantages of random testing is that you need some way to know when a test fails, as the system gives no indication by itself.
- To carry out random testing you require an oracle.
"By an oracle we mean a way to judge the outcome: you throw random inputs at the software system or application code, possibly from multiple threads, and if no error or fault occurs you can conclude that your software system or application is working well."
Another disadvantage of random testing is that you will often come across situations in which you have two distinct implementations of the same specification, namely:
  1. The golden model and
  2. The implementation.
- In such cases the test is declared a pass if and only if both of the implementations agree to a defined accuracy (a minimal sketch of this style of comparison follows below).
- When you decide to carry out random testing, you first need to make sure that the tests you are going to use are sufficiently random and that they cover the overall functionality of the software system or application.
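
A minimal sketch of this golden-model style of random testing is given below, in Python. The function fast_sort is a stand-in for a hypothetical implementation under test, and Python's built-in sorted acts as the golden model:

import random

def fast_sort(items):
    """Hypothetical implementation under test (stands in for a hand-rolled sort)."""
    return sorted(items)

def random_test_sort(trials=1000, max_len=50):
    """Throw random inputs at the implementation and compare with the golden model."""
    for _ in range(trials):
        data = [random.randint(-100, 100) for _ in range(random.randint(0, max_len))]
        expected = sorted(data)         # golden model (reference implementation)
        actual = fast_sort(list(data))  # implementation under test
        assert actual == expected, f"mismatch for input {data}"
    print(f"{trials} random trials passed")

random_test_sort()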

- Another disadvantage is that random testing is less efficient than directed testing. The advantage, however, is that the time needed to generate test cases for random testing is far less than the time needed to create a set of directed tests.
- Once you have programmed your random test generator, it can work 24 hours a day generating a whole lot of new tests.
- A conflict often arises in the minds of testers over whether they should choose random testing or functional testing.
- Here, it becomes necessary to know how many defects each technique can dig out.
- However, random testing proves useful even in situations where few defects are discovered per time interval, since this testing can run without any manual intervention.
- Usually the two testing processes mentioned above, i.e. random testing and functional testing, are found together in combination rather than alone.
- The usage also depends on the software system or application that is under test.
- The test cases used in such combined testing are termed directed random tests, since test cases can be classified on the basis of their randomness or functionality.
- Out of all the tests, very few are 100 percent random, and those usually are not of the most interesting kind.
- Whenever a random test consists of a large number of random elements that are mutually constrained, it becomes difficult to avoid thrashing, which also counts as a disadvantage.
- Certain languages, such as E and Vera, have been developed for defining the test cases for random testing.


Friday, June 29, 2012

Does automation replace manual testing? What are the main benefits of test automation?


Test automation has proved to be quite useful in reducing testers' nightmares when testing very complex, complicated and large software systems or applications. Almost all manual testing processes that follow formalized testing procedures have been automated. Although manual testing involves intense concentration and can root out many of the defects and faults in software systems and applications, it requires a great deal of patience, effort and time.

Can automation testing replace manual testing?


- In today's fast-paced software world, test automation has undoubtedly become one of the most strategic and critical necessities of software development.
- In past years, the level of testing was considered quite sufficient, since software systems and applications were not so dynamic.
- But in today's world we have explosively growing software systems and applications that need testing, and manual testing alone cannot suffice.
- There are many classes of defects that cannot be traced without automated testing, and there are several other types of errors and bugs that can be discovered using manual testing only.
- So automation testing cannot replace manual testing, though it can be considered a supplement to it.

With rising demands for qualitative testing two things are possible:
(i) Either we increase the number of people involved in testing or
(ii) We increase the level of test automation.

- The rising demands for the rapidly developing web clients have made the need for test automation much critical. 
- The most unfortunate thing is that testers are not given full time to hone their software testing skills, and the testers remain testers and do not become programmers.
- As a consequence, what should be simple ends up far too complex and difficult to implement.
- In order to get the most out of the test automation process, it should be implemented to the maximum extent it is possible. 
- If it is not used appropriately you can even face a failure in the long term development. 

To reap the full benefits of the test automation you need to keep the following things in mind:
  1. Test automation is not a sideline; it is a full-time effort.
  2. You should not confuse the test framework and the test design; they are not the same entity.
  3. The framework to be used for test automation should be application independent.
  4. The framework must be maintainable and long-lasting.
  5. The test design as well as the strategy must be independent of the framework.

Benefits of Test Automation



  1. Test automation reduces the number of staff required to carry out the testing.
  2. It consumes less time.
  3. It requires less effort.
  4. It is a much preferable option given the size and complexity of today's software systems and applications.
  5. Testing is a repetitive process, and test automation takes this drudgery away from the testers.
  6. Test automation allows machines to complete the tedious task of repetitive testing while the testers take care of other chores.
  7. The testing costs are reduced.
  8. Using test automation the load testing and stress testing can be performed very effectively.


Wednesday, June 27, 2012

What is the key difference between preventative and reactive approaches to testing?


There are two main approaches to testing namely:
  1. Preventative approach to testing and
  2. Reactive approach to testing

What is Preventive Approach to testing?


- The preventative approach to testing is analytical in nature, whereas the reactive approach is rather heuristic.
- In the preventative approach, tests are designed as early as possible, well before the software has been produced.
- In the reactive approach, on the other hand, tests are designed only after the software has been produced, in response to the behavior of the actual system and to feedback or review comments.
- In both approaches, tests should be designed as early as their inputs allow, but on an overall basis preventative tests are created much earlier than reactive tests.
- The preventative testing approach is based upon the philosophy that the quality of the software system or application under test can actually be improved by testing if it is done early enough in the software development life cycle.
- One condition must be followed to implement this approach: test cases should be created to validate the requirements before the code is written.
- The preventative approach is followed because it can be applied to all phases of the project, not just to the code, and because it reduces the cost of correcting the underlying faults.

What is Reactive approach to testing?


- Reactive approach to testing is considered to be a performance testing activity in the field of performance management. 
- The developers often do not think about the performance of the software system or application that they are developing during the early stages of the SDLC or software development life cycle. 
- More often, the performance quotient is neglected till the testing of the system is complete. 
- It is a known fact that the performance of a software system or application revolves around its architecture.
- Designing an effective architecture is a high-cost activity, and in some cases the whole system is trashed because of huge deviations in performance factors.
- However, waiting for performance-related problems to surface and then dealing with them is not always the best option.
- So performance testing is considered a reactive approach to testing, since performance does not gather much importance during the early stages of the life cycle.
- The reactive approach to testing is more of a "fix it later" approach, which is less effective than the preventative approach.
- Even quality control is a reactive approach to testing.
- Every step of the product development cycle involves quality control.
- It is not that the developers simply sit idle and note down all the areas where potential issues are suspected only after the final outcome.

Importance of both the approaches


- Actually preventative and reactive approaches are considered to be two “test strategies” which are used to define the objectives of the software’s testing and how they are to be achieved. 
- These two approaches also act as a determining factor in the cost and amount of effort that is to be invested in the testing of the software system or application. 
- In preventative approach the testers are involved right from the beginning of the project. 
- The specification and preparation i.e., test planning and test design is also carried out as soon as possible. 
- Preventative approach involves activities like:
  1. Reviews
  2. Static analysis
- Contrary to the preventative approach, in reactive approach testers are involved late in a project.
- The test planning and designing is started after the development of the project has ended. 
- Reactive approach involves techniques like exploratory testing.



Wednesday, June 20, 2012

What are different characteristics of smoke testing?


Smoke testing is considered one of the preliminary software testing techniques, performed before further testing, that is intended to reveal simple failures severe enough to cause a software build to be rejected.
In the term "smoke testing", smoke is used as a metaphor. All the test cases known to cover the most important functionality of a component of the software system or application are selected, grouped into a set, and then run. This is done to ascertain whether the most crucial functions and features of the software system or application are working as desired.
One can understand smoke testing better by having a look at the below mentioned questions:
  1. Does the program run?
  2. Does clicking the start button do anything?

Goals of Smoke Testing


- The primary goal of smoke testing is to determine whether the problem or fault in the software system or application is so severe that any further testing of that software would be a waste.
- To put it simply, smoke testing can be considered a cheap and quick way to broadly cover the features and functionalities of the software product in a limited period of time.
- By carrying out smoke testing, you can easily find out whether any key feature or functionality of your software system or application is broken, so that your development team does not spend further time on it before it is fixed or recreated.

People often confuse the two similar terms smoke testing and build verification test. They are indeed one and the same; the only difference is that when smoke testing is performed on a build it is called a build verification test.


When should smoke testing be performed?


- One of the best practices of software testing is that smoke testing should be performed every day without fail.
- Furthermore, smoke testing can also be performed by the testers before accepting a module or build for further testing.
- Smoke testing has been listed by the software giant Microsoft Corporation as the most cost-effective method, after code reviews, for identifying and fixing defects in a software system or application.
- At Microsoft Corporation, smoke testing is deployed as a process for validating the changes to be made in the code before they are passed on to source control.
- Smoke testing can be carried out either automatically or manually; that is purely the choice of the tester.
- When smoke testing is carried out manually the process runs normally, but when automated tools are used, the tests to be carried out are initiated by the same process that generates the build.

Smoke tests have 2 types as mentioned below:
  1. Unit tests: Smoke tests under this category are known to exercise the sub routines, object methods and individual functions and so on.
  2. Functional tests: These types of smoke tests are known to exercise the complete program along with various inputs.
Both of the above-mentioned types of smoke testing tools together make up a third-party product that falls outside the compiler suite.
A functional test is formed from a scripted series of program inputs, and some may also have an automated checking mechanism. Unit tests, on the other hand, may be formed out of the separate functions that lie within the code, or may even be a driver layer linked to the code. Smoke testing in software can be compared to smoke testing in hardware.
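
A minimal sketch of the two kinds of smoke test might look like the following, in Python; the module name myapp and its command-line entry point are hypothetical assumptions for illustration, not a real package:

import subprocess
import sys

def smoke_test_unit():
    """Unit-style smoke test: exercise an individual function/entry point."""
    from myapp import main          # hypothetical module under test
    assert callable(main), "entry point is missing"

def smoke_test_functional():
    """Functional smoke test: does the whole program even run and exit cleanly?"""
    result = subprocess.run([sys.executable, "-m", "myapp", "--version"],  # hypothetical CLI
                            capture_output=True, timeout=30)
    assert result.returncode == 0, "program failed to start"

if __name__ == "__main__":
    smoke_test_unit()
    smoke_test_functional()
    print("smoke tests passed: the build is good enough for further testing")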


Monday, June 18, 2012

How is data understood through reverse engineering?


Reverse engineering forms an integral part of the whole software re-engineering process. To apply reverse engineering properly within the re-engineering process, one needs to understand the data through reverse engineering.
In this article we take up exactly that topic, i.e. "how is data understood through reverse engineering?"
- When reverse engineering is carried out, the main objective is always to recover the designs and specifications of the software system or application.
- With reverse engineering, the system is only understood; no changes are made to the software system or application.
- For the reverse engineering process, the source code of the software system or application is fed in as input.
- If you go through the history of reverse engineering, you will find some cases where executable code was given as input to the reverse engineering process.
- Reverse engineering, though in great contrast with re-engineering, forms an integral part of it.
- Reverse engineering serves the purpose of recovering system designs and specifications in the software re-engineering process model.
- These recovered designs and specifications are used by the engineers to understand the software system or application before they start re-organizing its whole structure.
- However, there have been cases in which reverse engineering is not followed by the re-engineering process.
- There are three stages in the reverse engineering process:
  1. The system to be re-engineered is subjected to automated analysis.
  2. It is then manually annotated.
  3. With the system information obtained, a whole set of new documentation is generated containing:
     (a) Program structure diagrams
     (b) Data structure diagrams and
     (c) Traceability matrices.

Activities in Reverse engineering


In reverse engineering, the data is understood through three basic activities that involve intense understanding effort:
1. Understanding the process: in order to understand the procedural abstractions as well as the functionality, the source code is analyzed at the following levels:
(a)  System
(b)  Program
(c)   Component
(d)  Statement and
(e)  Pattern

2. Understanding data: the internal data structures and the database structure are analyzed.
3. Understanding user interfaces: the basic actions processed by the interface, the system's behavioral response to those actions, and the equivalence of the interfaces are analyzed.

More about reverse engineering....
- The beginning of the reverse engineering process is marked by an analysis phase, in which analysis is carried out with the help of automated tools in order to discover the structure of the software system or application (a small sketch of such automated analysis appears below).
- This stage by itself, however, does not suffice for recovering the whole structure.
- Engineers then have to work on the program code and the model, adding the recovered information to them.
- The recovered information is maintained in the form of a directed graph.
- The engineers always make it a point to link the directed graph to the source code.
- The directed graph is compared with the code with the help of information browsers and the information store.
- This graph helps in generating traceability matrices and the data structure diagrams.
- The tools that are used make it easy to navigate through the code.
- After the design documentation has been completely generated, the information store is supplied with additional information as a means of re-creating the system specifications.
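
As a small illustration of the automated-analysis step, the sketch below uses Python's standard ast module to recover a directed call graph (caller to callee) from Python source code. It is only a toy analyzer, not a full reverse engineering tool:

import ast

def build_call_graph(source_code):
    """Recover a directed graph of function calls from Python source."""
    tree = ast.parse(source_code)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            # collect names of functions called directly inside this function
            callees = {n.func.id for n in ast.walk(node)
                       if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}
            graph[node.name] = sorted(callees)
    return graph

source = """
def load(path): return open(path).read()
def process(path): return load(path).upper()
"""
print(build_call_graph(source))   # {'load': ['open'], 'process': ['load']}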

