

Sunday, September 9, 2007

What is the Rational Unified Process (RUP) ?

To simplify and formalize the strategy of iterative software development, Rational Software Corporation (a division of IBM since 2003) came up with an iterative software development process framework, now popularly known as the Rational Unified Process, or RUP. RUP was designed as an adaptable process framework rather than a rigid, prescriptive process. One advantage of this approach is that development organizations have the freedom to tailor it to their needs: project teams select the elements of the process they consider appropriate. 
The product comes with many sample artifacts and detailed descriptions for a number of activities supported by RUP. It has been included as part of IBM's Rational Method Composer (RMC), which allows easy customization of the software development process. RUP lays down six best practices, which evolved from the combined experience of many companies. These six best practices are:
Ø  Develop iteratively, with risk as the primary iteration driver
Ø  Manage requirements
Ø  Employ a component-based architecture
Ø  Model the software visually
Ø  Continuously verify the quality of the product
Ø  Control changes
All these practices were employed by Rational for the development of its own products, and its staff used them to help customers improve the predictability and quality of their development efforts. RUP can be tailored to guide software development; it comes with tools that automate the application of the process, and with services that accelerate the adoption of both the process and its tools. These three things, process, tools, and services, form a strategic tripod for the implementation of RUP. The process is founded on a set of building blocks and elements that describe what is to be produced, the skills required, and how the work is to be carried out. The three main building blocks are:
Ø  Roles: A role defines a set of related skills, competencies, and responsibilities, determining who does what.
Ø  Work products: A work product represents the result of a task, including all the models and documents produced during the process.
Ø  Tasks: A task describes a unit of work assigned to a role that produces a meaningful result.
A project typically goes through many iterations. Within each iteration, the tasks are divided into a total of nine disciplines:

Ø  Six engineering disciplines:
-          Business modeling
-          Requirements
-          Analysis and design
-          Implementation
-          Test
-          Deployment
Ø  Three supporting disciplines:
-          Environment
-          Configuration and change management
-          Project management

There is also a tool for configuring, authoring, and viewing processes; processes can even be published. The RUP certification, the IBM Certified Solution Designer – Rational Unified Process 7.0, was released in 2007, replacing the earlier IBM Rational Certified Specialist – Rational Unified Process. The new version of the exam covers both RUP content and process structure elements. To earn this certification you take Test 839: Rational Unified Process v7.0, which consists of 52 questions to be answered in 75 minutes.


Monday, September 3, 2007

Factors in making a project successful - some quick factors

Unpredictable, late, over-budget projects are not uncommon; in some cases a project fails before it delivers even one program. In this article we discuss some success factors that are crucial for a successful project. Even though we have gone through four generations of programming languages and three development paradigms, we are still not reliably capable of transforming our ideas into successful software, and the number of software project failures has risen in recent years. No magic goes into the successful management of software development, but there are some factors that make it better:

Complexity management: Several characteristics of software make the management of the development process complicated and thus difficult. First, software-based systems are themselves quite complex. Second, computing itself is complex, and mastering it is the basic problem. The developers who work out such complex problems are, unsurprisingly, highly intelligent and complex individuals, which complicates their management as well. On top of that, when developers are also trying to target shifting user requirements, all of these management issues mix together. One study has shown an improvement in software completion rates when complexity is lower. Companies are therefore adopting projects that are small and thus easy to manage rather than taking up large projects, or are breaking larger ones into smaller pieces.

Starting on the right side: It is difficult to develop successful software when the development effort starts badly, just as you cannot grow strong plants in weak soil. Below we list some failure symptoms, some of which are predetermined before development even starts:
- A lack of understanding of the users' needs.
- A poorly defined project scope.
- Changes in the technology chosen for development.
- Changes in the business needs.
- Unrealistic deadlines.
- Resistant users.
- Lack of sponsorship.
- Lack of appropriate skills.
- Ignorance of best practices.
To avoid these, set objectives that can actually be achieved and expectations that can actually be met. Your team should consist of people with the skills the job requires, and it must be given adequate resources to meet its needs.

Momentum maintenance: Once you have a strong team, a good working environment, and good resources, you can gain momentum. The harder part is sustaining and increasing it, since momentum keeps changing over the course of development. Your focus should be on keeping attrition low, monitoring quality early, and managing the product more.

Progress tracking: Software is intangible; it cannot be physically touched or measured. If you don't know what mistakes you are making while running the project, it is quite possible that you'll keep repeating them, so progress must be tracked explicitly.

Smart decision making: The difference between failed and successful projects comes from making smart decisions. It is often not difficult to analyze whether a decision is good or bad before you implement it. Bad decisions are frequently made when selecting technologies: you may find that before your project is finished, the platform it depends on goes away. Before picking a technology, analyze it and check whether there is a real market for it.

Post-mortem analysis: Successful companies analyze each project to learn from their mistakes. If this is not done, the same mistakes will be repeated again and again.


Thursday, August 23, 2007

Some definitions of regression testing and what it means ..

What is regression testing?

Regression testing is a type of software testing used to discover new errors and bugs in a software system; in regression testing terminology these bugs are called regressions. Regression testing is carried out on already existing areas of the system, whether functional or non-functional, after any change has been made, such as configuration changes or patches. Its basic purpose is to ensure that a change or modification made to the software does not break existing functionality or introduce new errors, and to check whether a change in one part of the system has had any effect on other parts. Some common methods of carrying out regression testing are:

- Re-executing tests that have already been executed.
- Checking whether the working of the software has changed.
- Checking whether errors corrected earlier have reappeared.

Regression testing as such is very time consuming, since you have to test everything again and again, but it can be performed effectively by selecting a subset of tests sufficient to cover the unit in which the change was made. Research has found that making a change in one part of the software often introduces errors in other parts. In some cases the same errors reoccur because the fix made earlier to prevent them gets lost, probably through poor revision practices when humans revise the code. Often the fix applied for a problem is so fragile that it is easily broken or lost, and sometimes a fix for one part of the software causes an error in another part.
In other cases the same mistakes made while designing the original software are repeated during redesign. Regression testing is therefore generally considered good practice: locate and fix a bug, record the test that led to its discovery, and re-execute that test at regular intervals after the program is modified. This could be done manually, but it is better carried out with automated testing, which takes less effort and time than manual testing.
An automated testing suite consists of software tools that execute test cases automatically and generate reports. Programmers can set up such systems to run regression tests at regular intervals: after every compilation, every night, or once a week. Various tools are available for automated regression testing, such as:
- Tinderbox
- BuildBot
- Hudson
- Jenkins
- Bamboo
- TeamCity
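The idea such tools automate can be sketched in plain Python. The `apply_discount` function and its bug history below are hypothetical, invented for illustration: once a bug is fixed, a test pinning the correct behaviour stays in the suite so the fix cannot silently regress.

```python
def apply_discount(price, percent):
    """Return price reduced by `percent`, never below zero."""
    discounted = price * (100 - percent) / 100
    # Clamp at zero: an earlier (hypothetical) version omitted this and
    # returned negative prices for discounts above 100%.
    return max(discounted, 0.0)

def test_normal_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_regression_no_negative_price():
    # Regression test recorded when the negative-price bug was fixed;
    # re-running it after every change ensures the fix is not lost again.
    assert apply_discount(50.0, 150) == 0.0

if __name__ == "__main__":
    test_normal_discount()
    test_regression_no_negative_price()
    print("all regression tests passed")
```

A continuous-integration tool would discover and run such `test_` functions automatically on every build.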

Regression testing is closely associated with extreme programming, of which it is an inseparable part: at each phase of the software development cycle, the entire software package is put through repeatable, extensive automated testing. Traditionally the software quality assurance team performs regression testing after development is over, but fixing bugs at that stage is very expensive. This potential problem is addressed at an earlier stage by means of unit testing: the test cases written by the programmers to verify intended outcomes are either unit tests or functional tests. 


Tuesday, August 21, 2007

Advantages and Disadvantages of White Box Testing

Advantages of White box testing are:
i) As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help test the application effectively.
ii) It helps in optimizing the code.
iii) It helps in removing extra lines of code, which can bring in hidden defects.
iv) It forces the test developer to reason carefully about the implementation.


White-box testing is an important method for the early detection of errors during software development. In this process test case generation plays a crucial role, defining appropriate and error-sensitive test data. White-box testing strategies include designing tests such that every line of source code is executed at least once, or requiring every function to be individually tested. Code coverage is a significant benefit provided by white box testing: it is much easier to determine whether you've looked at all functions, libraries, and so on when you know what they all are.


Disadvantages of white box testing are:
i) As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
ii) It is nearly impossible to look into every bit of code to find hidden errors, so some problems may slip through and cause the application to fail.
iii) The code is not examined in a runtime environment. That matters for a number of reasons: exploitation of a vulnerability depends on all aspects of the platform being targeted, and source code is just one of those components. The underlying operating system, the backend database, third-party security tools, dependent libraries, and so on must all be taken into account when determining exploitability, and a source code review cannot take these factors into account.
iv) Very few white-box tests can be done without modifying the program, changing values to force different execution paths, or generating a full range of inputs to test a particular function.
v) It misses cases omitted from the code: missing functionality leaves nothing to examine.


What is White Box Testing ?

The white box testing strategy deals with the internal logic and structure of the code. White box testing is also called glass box, structural, open box, or clear box testing. Tests written using this strategy cover the written code: its branches, paths, statements, and internal logic. White box testing is testing from the inside: tests that go in and exercise the actual program structure.
In order to implement white box testing, the tester has to deal with the code and hence needs to possess knowledge of coding and logic, i.e., the internal working of the code. White box testing also requires the tester to look into the code and find out which unit, statement, or chunk of code is malfunctioning. The tester chooses test case inputs to exercise paths through the code and determines the appropriate outputs.
While white box testing is applicable at the unit, integration and system levels of the software testing process, it's typically applied to the unit. So while it normally tests paths within a unit, it can also test paths between units during integration, and between subsystems during a system level test. Though this method of test design can uncover an overwhelming number of test cases, it might not detect unimplemented parts of the specification or missing requirements. But you can be sure that all paths through the test object are executed.
For a further definition, White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.
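The first two derivation rules can be illustrated with a small sketch; the `classify` function below is hypothetical, chosen only to show decisions and paths:

```python
def classify(n):
    """Toy function with two decisions, used to derive white-box tests."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# Three inputs exercise each decision on both its true and false side,
# and together cover all three independent paths through the function.
assert classify(-1) == "negative"   # first decision true
assert classify(0) == "zero"        # first decision false, second true
assert classify(5) == "positive"    # both decisions false
```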

Typical white box test design techniques include:
* Control flow testing
* Data flow testing


Orthogonal Array Testing Strategy (OATS)

The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise interactions. It provides representative (uniformly distributed) coverage of all variable pair combinations. This makes the technique particularly useful for integration testing of software components (especially in OO systems, where multiple subclasses can be substituted as the server for a client). It is also quite useful for testing combinations of configurable options (such as a web page that lets the user choose the font style, background color, and page layout).
Orthogonal array testing is a statistical testing technique popularized by Taguchi. The method is extremely valuable for testing complex applications and e-commerce products. The e-commerce world presents interesting challenges for test case design and testing coverage: a black box testing technique alone will not provide sufficient coverage, because the underlying infrastructure connections between servers and legacy systems will not be understood by a black box testing team. A gray box testing team will have the necessary knowledge, and combined with the power of statistical testing, an elaborate testing net can be set up and implemented.

The theory:
Orthogonal Array Testing (OAT) can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. An orthogonal array is an array of values in which each column represents a variable (a factor) that can take a certain set of values called levels, and each row represents a test case. In OAT, the factors are combined pair-wise rather than representing all possible combinations of factors and levels.
Orthogonal arrays are two-dimensional arrays of numbers which possess the interesting quality that choosing any two columns in the array yields an even distribution of all the pair-wise combinations of values in the array.

Why use this technique?
Test case selection poses an interesting dilemma for the software professional. Almost everyone has heard that you can't test quality into a product, that testing can only show the existence of defects and never their absence, and that exhaustive testing quickly becomes impossible -- even in small systems.
However, testing is necessary. Being intelligent about which test cases you choose can make all the difference between (a) endlessly executing tests that just aren't likely to find bugs and don't increase your confidence in the system, and (b) executing a concise, well-defined set of tests that are likely to uncover most (not all) of the bugs and that give you a great deal more comfort in the quality of your software.

The basic fault model that lies beneath this technique is:
1. Interactions and integrations are a major source of defects.
2. Most of these defects are not a result of complex interactions such as "When the background is blue and the font is Arial and the layout has menus on the right and the images are large and it's a Thursday then the tables don't line up properly."
3. Most of these defects arise from simple pair-wise interactions such as "When the font is Arial and the menus are on the right the tables don't line up properly."
4. With so many possible combinations of components or settings, it is easy to miss one.
5. Randomly selecting values to create all of the pair-wise combinations is bound to create inefficient test sets and test sets with random, senseless distribution of values.

OATS provides a means to select a test set that:
1. Guarantees testing the pair-wise combinations of all the selected variables.
2. Creates an efficient and concise test set with many fewer test cases than testing all combinations of all variables.
3. Creates a test set that has an even distribution of all pair-wise combinations.
4. Exercises some of the complex combinations of all the variables.
5. Is simpler to generate and less error prone than test sets created by hand.
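These properties can be checked mechanically. The sketch below hardcodes the standard L4(2^3) orthogonal array for three two-level factors (the factor names and levels are invented for illustration) and verifies that its four rows cover every pair-wise combination that exhaustive testing would need eight cases to reach:

```python
from itertools import combinations, product

# Three two-level factors; names and levels are illustrative only.
levels = {
    "font":   ["Arial", "Times"],
    "colour": ["blue", "white"],
    "layout": ["left", "right"],
}
factors = list(levels)

# L4(2^3) orthogonal array: each row is a test case, each column a factor.
oa = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
tests = [{f: levels[f][row[i]] for i, f in enumerate(factors)} for row in oa]

# Any two columns yield an even distribution of all pair-wise combinations.
for f1, f2 in combinations(factors, 2):
    seen = {(t[f1], t[f2]) for t in tests}
    assert seen == set(product(levels[f1], levels[f2]))
print(f"{len(tests)} tests cover all pair-wise combinations (vs 8 exhaustive)")
```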


Sunday, August 19, 2007

Black Box testing techniques

Black box testing is a testing technique that uses no knowledge of the internal functionality or structure of the system; it treats the system as a black, or closed, box. The tester knows only the formal inputs and expected results, not how the program actually arrives at those results, and therefore tests the system against the functional specifications given to him. That is the reason black box testing is also considered functional testing. This technique is also called behavioral testing, opaque box testing, or simply closed box testing. Although black box testing is a form of behavioral testing, behavioral test design is slightly different from black-box test design because the use of internal knowledge is not illegal in behavioral testing.

Black Box Testing has many types of techniques like:

1. Decision Table

Decision tables are a precise yet compact way to model complicated logic. Decision tables, like if-then-else and switch-case statements, associate conditions with actions to perform. But, unlike the control structures found in traditional programming languages, decision tables can associate many independent conditions with several actions in an elegant way.
Decision tables make it easy to observe that all possible conditions are accounted for: when every combination of the conditions is listed, an omitted condition is obvious even at a glance, signalling that logic is missing. Compare this to traditional control structures, where it is not easy to notice gaps in program logic with a mere glance; sometimes it is difficult to follow which conditions correspond to which actions!
Just as decision tables make it easy to audit control logic, decision tables demand that a programmer think of all possible conditions. With traditional control structures, it is easy to forget about corner cases, especially when the else statement is optional. Since logic is so important to programming, decision tables are an excellent tool for designing control logic.
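A decision table can even be written directly as data. The sketch below models a hypothetical login flow; the condition names and actions are invented for illustration:

```python
from itertools import product

# (valid_user, valid_password, account_locked) -> action.
decision_table = {
    (True,  True,  False): "grant access",
    (True,  True,  True):  "show locked message",
    (True,  False, False): "ask to retry",
    (True,  False, True):  "show locked message",
    (False, True,  False): "reject",
    (False, True,  True):  "reject",
    (False, False, False): "reject",
    (False, False, True):  "reject",
}

def decide(valid_user, valid_password, locked):
    return decision_table[(valid_user, valid_password, locked)]

# Completeness is auditable at a glance: every one of the 2**3 condition
# combinations must appear as a key, or logic is missing.
assert all(c in decision_table for c in product([True, False], repeat=3))
assert decide(True, True, False) == "grant access"
```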

2. Equivalence Partitioning

The testing theory related to equivalence partitioning says that only one test case of each partition is needed to evaluate the behaviour of the program for the related partition. In other words it is sufficient to select one test case out of each partition to check the behaviour of the program. To use more or even all test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably.
Types of Equivalence Classes
* Continuous classes run from one point to another, with no clear separations of values. An example is a temperature range.
* Discrete classes have clear separation of values. Discrete classes are sets, or enumerations.
* Boolean classes are either true or false. Boolean classes only have two values, either true or false, on or off, yes or no. An example is whether a checkbox is checked or unchecked.
Equivalence partitioning is not a stand-alone method for determining test cases; it has to be supplemented by boundary value analysis. Having determined the partitions of possible inputs, boundary value analysis is applied to select the most effective test cases from those partitions.
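The idea can be sketched in a few lines; the ticket-pricing function and its partitions below are hypothetical:

```python
def ticket_price(age):
    """Toy pricing rule with three valid partitions and one invalid one."""
    if age < 0:
        raise ValueError("invalid age")
    if age < 18:
        return 5    # child partition: 0..17
    if age < 65:
        return 10   # adult partition: 18..64
    return 7        # senior partition: 65 and up

# One representative value per partition suffices; any other value from
# the same partition would exercise the same behaviour.
assert ticket_price(10) == 5     # child
assert ticket_price(30) == 10    # adult
assert ticket_price(70) == 7     # senior
try:
    ticket_price(-3)             # invalid partition
    raise AssertionError("negative age should have been rejected")
except ValueError:
    pass
```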

3. Boundary Value Analysis

Boundary value analysis is a software test design technique for determining test cases that catch off-by-one errors. The boundaries of a software component's input ranges are areas of frequent problems: testing experience has shown that the boundaries of input ranges are especially liable to defects.
To set up boundary value analysis test cases you first have to determine which boundaries exist at the interface of a software component, which is done by applying the equivalence partitioning technique. Boundary value analysis and equivalence partitioning are thus inevitably linked together.
The tendency is to associate boundary value analysis with so-called black box testing, which strictly checks a software component at its interfaces without consideration of the internal structures of the software. But looking closer at the subject, there are cases where it applies to white box testing as well.
After determining the necessary test cases with equivalence partitioning and subsequent boundary value analysis, it is necessary to define the combinations of the test cases when there are multiple inputs to a software component.
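As a small sketch (the percentage field below is hypothetical), boundary value analysis places test values at, just inside, and just outside each partition edge:

```python
def accept_percent(value):
    """Toy input check: a field accepting whole percentages 0..100."""
    return 0 <= value <= 100

# Values at and on either side of each boundary: exactly where
# off-by-one errors (e.g. writing < instead of <=) would show up.
assert accept_percent(0)         # lower boundary
assert not accept_percent(-1)    # just below lower boundary
assert accept_percent(1)         # just inside lower boundary
assert accept_percent(100)       # upper boundary
assert accept_percent(99)        # just inside upper boundary
assert not accept_percent(101)   # just above upper boundary
```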

4. Use Case Method

A use case is a technique used in software and systems engineering to capture the functional requirements of a system. Use cases describe the interaction between a primary actor (the initiator of the interaction) and the system itself, represented as a sequence of simple steps. An actor is someone or something that exists outside the system under study and takes part in a sequence of activities in a dialogue with the system to achieve some goal: actors may be end users, other systems, or hardware devices. Each use case is a complete series of events, from the point of view of the actor.
Each use case provides one or more scenarios that convey how the actor will interact with the system to achieve a specific business goal or function. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert. Use cases are often co-authored by systems analysts and end users. They are separate and distinct from UML use case diagrams, which allow one to abstractly work with groups of use cases.
A use case should:
* describe how the system shall be used by an actor to achieve a particular goal.
* have no implementation-specific language.
* be at the appropriate level of detail.
* not include detail regarding user interfaces and screens; this is done in user-interface design.

5. State Transition Tables

In automata theory and sequential logic, a state transition table is a table showing what state (or states in the case of a nondeterministic finite automaton) a finite semiautomaton or finite state machine will move to, based on the current state and other inputs. A state table is essentially a truth table in which some of the inputs are the current state, and the outputs include the next state, along with other outputs. A state table is one of many ways to specify a state machine, other ways being a state diagram, and a characteristic equation.
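The classic turnstile example shows how directly a state transition table maps to code; this sketch represents the table as a plain dictionary keyed on (current state, input):

```python
# State transition table: (current state, input) -> next state.
transition_table = {
    ("locked",   "coin"): "unlocked",
    ("locked",   "push"): "locked",
    ("unlocked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

def run(events, state="locked"):
    """Drive the machine through a sequence of inputs; return the final state."""
    for event in events:
        state = transition_table[(state, event)]
    return state

assert run(["coin"]) == "unlocked"          # paying unlocks
assert run(["coin", "push"]) == "locked"    # passing through relocks
assert run(["push", "push"]) == "locked"    # pushing while locked does nothing
```

Test cases derived from such a table simply enumerate its rows, so missing transitions are as easy to spot as missing rules in a decision table.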

6. Cross-functional testing
7. Pairwise testing

