
Thursday, May 1, 2025

Determining the Urgency of Resolving Defects in Software Development

Why Defect Resolution is Critical

Software defects can impact user experience, security, and overall functionality. Knowing when to prioritize defect resolution is essential for maintaining software quality and reliability.

Factors That Determine Defect Urgency

Not all defects require immediate attention. Some can wait for a later release, while others demand urgent resolution. Here are the primary factors that influence defect urgency:

1. Severity of the Defect

  • Critical defects: Issues that cause system crashes, data loss, or security breaches must be fixed immediately.
  • Major defects: Problems that significantly affect functionality but have workarounds can be scheduled for the next release.
  • Minor defects: Cosmetic issues or minor inconveniences can be resolved in future iterations.

2. Impact on Users

The number of users affected by a defect is a key consideration. If a bug impacts a large user base or a critical feature, fixing it should be a top priority.

3. Business Implications

Some defects may lead to revenue loss or compliance violations. In such cases, resolving the issue quickly is crucial to maintaining business integrity.

4. Frequency of Occurrence

Bugs that occur frequently require more immediate attention compared to rare issues. High-frequency defects can indicate deeper systemic problems that need to be addressed.

5. Security Risks

Security vulnerabilities should always be addressed urgently to prevent potential data breaches or cyber-attacks.

6. Dependencies on Other Features

If a defect affects multiple components or prevents other features from functioning, resolving it becomes more urgent.

How to Prioritize Defect Resolution

Using a structured approach to defect prioritization ensures that resources are allocated efficiently. Consider the following strategies:

1. Categorization Using a Defect Matrix

Developers can use a defect matrix to classify issues based on severity and impact, helping teams prioritize effectively.
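As a sketch of how such a matrix might look in code, the snippet below maps severity and user impact to a priority bucket. The level names, weights and thresholds are illustrative assumptions, not a standard:

```python
# Illustrative defect matrix: maps (severity, impact) to a priority bucket.
# The level names and thresholds here are assumptions for the sketch.

SEVERITY = {"critical": 3, "major": 2, "minor": 1}
IMPACT = {"high": 3, "medium": 2, "low": 1}

def prioritize(severity: str, impact: str) -> str:
    """Combine severity and user impact into a priority bucket."""
    score = SEVERITY[severity] * IMPACT[impact]
    if score >= 6:       # e.g. critical/high, critical/medium, major/high
        return "fix immediately"
    if score >= 3:       # e.g. major/medium, minor/high, critical/low
        return "next release"
    return "backlog"     # cosmetic or low-impact issues

print(prioritize("critical", "high"))   # fix immediately
print(prioritize("major", "medium"))    # next release
print(prioritize("minor", "low"))       # backlog
```

In practice teams tune the thresholds to their own release policy; the point is that the classification is explicit rather than argued case by case.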

2. Utilizing Agile Methodologies

Frameworks like Scrum and Kanban help teams assess defects dynamically and allocate sprint resources accordingly.

3. Stakeholder Involvement

Engaging product owners and end-users ensures that defect prioritization aligns with business and customer needs.

Final Thoughts

Determining defect urgency is essential for maintaining software quality, user satisfaction, and security. By leveraging structured prioritization methods, teams can optimize resources and enhance software reliability effectively.


Thursday, July 25, 2013

Defect Management: Dividing overall defect reports into separate functional areas - Part 3

This is a series of posts about building a defect status report that gives the team and its managers better information for decision-making. In the previous post (Count of new incoming defects), I talked about adding parameters to the report that show whether the rate at which defects are being added each day will still let the team reach their defect milestones by the required date. This data, arriving on a regular daily cycle, helps the team decide whether their current defect trend is on the desired path, above it, or below it, and make the required decisions accordingly.
This post adds more detail to the defect report. The last post covered the flow of incoming defects that move to the ToFix state, but there is another type of information that is relevant. Defects in the system are not all owned by the development team: besides defects in the core code, there may be defects in components used by the application. These are not attributable to the development team but to the vendors or other groups that provide those components. A number of teams that I know track these defects in the defect database, but keep them distinct from the defects owned by the core team.
Defects filed against external components behave differently from defects in the core code. To the customer it does not matter whether the defect is in the core code or in an external component, but the coordination and communication effort is entirely different. If a defect is in a component the team does not own, the fix may take longer and need persuasion, and there may be a lot of back and forth between the tester and the component team to reproduce the situation in which the defect occurs. That can include sending the environment in which the defect occurred to the external vendor, which has its own cost and restrictions: if the team is working on a new version of the software, there will be NDA and IP issues around sharing environment details with the component team. Another concern is that even when such a defect is resolved, it might require a new version of the component, which carries the extra cost of testing that component on its own to check whether it is fine or has introduced other issues.
As a result, the report needs to separate incoming defects by whether they belong to the core team or to people outside it. If the proportion of defects outside the core team is increasing, that is a matter of concern, since resolving such defects typically takes much more effort and time.
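A minimal sketch of tracking that proportion, assuming each defect record carries an owner field (the field and owner names are illustrative):

```python
# Sketch: compute what share of incoming defects belong to external
# components rather than the core team. Field names are assumptions.

def external_share(defects):
    """Return the fraction of defects owned outside the core team."""
    if not defects:
        return 0.0
    external = sum(1 for d in defects if d["owner"] != "core")
    return external / len(defects)

incoming = [
    {"id": 101, "owner": "core"},
    {"id": 102, "owner": "vendor-codec"},
    {"id": 103, "owner": "core"},
    {"id": 104, "owner": "vendor-installer"},
]
print(f"external share: {external_share(incoming):.0%}")  # external share: 50%
```

Plotted day by day, a rising curve of this share is the early warning the post describes.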


Wednesday, July 24, 2013

Defect Management: Dividing overall defect reports into separate functional areas - Part 2

Part 1 (Dividing defect reports into ToFix, ToTest and ToDefer counts) talked about the importance of defect management in a software project, and then went into detail about regularly sending out a report of total defects, broken down into ToFix, ToTest and ToDefer counts and plotted on a graph with daily updates, so that the team and the managers can see whether they are on track to resolve these bugs.
This post continues along that line, covering additional items that can be added to the defect chart and metrics to give the team more information about whether it is on the right track. Are all these metrics important? There are plenty of comments about not over-burdening people with statistics, and more about letting people do their work rather than sending so many numbers that they stop looking at them. But part of the team managers' role is to look at the broader project status, and defect management is an important part of that. Team members should not be burdened with these figures, but for the team managers it is critical to look at such data.
So the team watches the ongoing ToFix figures over a period of days and tries to determine whether it is on the right track. What else should you be capturing? Another metric worth adding to the report is the number of defects still incoming. There are two main ways defects get added to the developers' count:
- New defects logged against the development team, which add to their count and to the overall ToFix count
- Defects rejected by the testing team after being marked fixed, because there is a problem in the fix. This can vary a lot between teams and even within a team: one developer may fix defects with hardly any returns, while another under pressure may have many defects returned. Tracking this statistic tells the team whether it is seeing this kind of churn in its defect management.
Once you have these defect counts, you can judge the current status and see whether the team is on track. You have a total count of open ToFix defects, and a rate of decline needed to hit the deadlines. But to reach that deadline, the number of incoming defects also has to fit the strategy. If incoming defects are high, the ToFix count will not decrease at the rate the team needs to hit its targets, and the strategy will have to change for the team to get there.
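The arithmetic behind this check can be sketched as follows; the function and the numbers are illustrative, not from any particular team's report:

```python
# Sketch of the burn-down check described above: will the ToFix count reach
# the milestone in time, given the rate of incoming (new + reopened) defects?
# All numbers are made up for illustration.

def on_track(open_tofix, target, days_left, fixed_per_day, incoming_per_day):
    """True if the net daily reduction is enough to hit the target."""
    required_net = (open_tofix - target) / days_left
    actual_net = fixed_per_day - incoming_per_day  # incoming = new + reopened
    return actual_net >= required_net

# 120 open defects, milestone of 20, 10 working days left:
# the team needs a net reduction of 10 defects per day.
print(on_track(120, 20, 10, fixed_per_day=14, incoming_per_day=3))  # True
print(on_track(120, 20, 10, fixed_per_day=14, incoming_per_day=6))  # False
```

The second call shows the point of the post: the same fix rate stops being good enough once incoming defects rise.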


Tuesday, July 23, 2013

Defect Management: Dividing overall defect reports into separate functional areas - Part 1

Handling defects is one of the major efforts in keeping a project schedule on track and making it successful. I have known multiple teams that did not keep a good running estimate of their defect count and of the defect trend over the time remaining in the schedule. As a result, close to the final stages they found they had too many defects, which made the remaining schedule very tight. An accurate reckoning of their status would have forced them either to defer more defects and end up with a lower-quality product, or to extend the timeline / schedule, which has huge implications for the team and for the many other teams involved in the release schedule of the product.
How do you avoid this? The first paragraph of this post points out a huge problem, and the answer cannot fit in a single post; it can be handled by a single cheesy phrase that provides no solutions: "You need to do Defect Management". So let us get down to the meat of this post, which takes one specific aspect of defect management: sending out a split of the defect counts by area. This gives the team a better view of the defect picture and helps the overall process of defect management.
We wanted a system that tracked counts for each separate functional area while giving the management team ongoing access to the data. This also let each functional team target the defect counts for its own area and work towards reducing them. So we took the overall data for open defects for the product and split it into the following areas:
Open defects:
ToFix (primarily defects owned by the development team, although some may be carried by other teams, such as defects in components supplied by external teams)
ToTest (primarily defects owned by the testing team, although since anybody on the team can file a defect, people outside the testing team may own some)
ToDefer (the exact terminology differs across organizations, but these are typically defects sitting with a defect review committee for evaluation. They can be significant defects that need committee evaluation before being fixed, or defects not worth fixing where the team wants the committee to take the final call, and so on).
What is the purpose of sending out these separate stats on a regular basis? Plotted on a graph over a period of time, this data provides a large amount of information. The team and the managers, even though they are in the thick of things, sometimes need aggregate information like this to take a good decision. For example, if the team is in the second half of the cycle and close to the deadline, yet the ToFix graph shows no declining trend, that is something to worry about. At that point I have seen the development team manager hold a serious discussion with the team to figure out what is happening and how to reduce the counts. In extreme cases, I have seen the team take a hard look at the defect counts and recommend extending the schedule (which is not a simple step to take).
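A minimal sketch of producing such a daily split, assuming each defect record carries a status field with these bucket names:

```python
# Sketch of the daily split described above: bucket open defects into
# ToFix / ToTest / ToDefer counts. The status values are assumptions.
from collections import Counter

def split_open_defects(defects):
    """Count open defects per bucket for the daily report."""
    return Counter(d["status"] for d in defects if d["status"] != "Closed")

open_defects = [
    {"id": 1, "status": "ToFix"},
    {"id": 2, "status": "ToTest"},
    {"id": 3, "status": "ToFix"},
    {"id": 4, "status": "ToDefer"},
    {"id": 5, "status": "Closed"},
]
print(dict(split_open_defects(open_defects)))
# {'ToFix': 2, 'ToTest': 1, 'ToDefer': 1}
```

Run once a day and appended to a time series, these three counts are exactly what the graphs in this post plot.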


Tuesday, April 30, 2013

Defect handling - Planning how to find big bugs earlier in the cycle

In every cycle of software development, whether product development or project work, one of the key activities is finding defects. There can be small defects or big defects, but the plan is always to try to fix them. Smaller defects can be easier to handle, since they have less impact on customers, and in several cases it is even easy to defer low-severity bugs when time and resources are squeezed. Larger bugs, those that impact functionality or workflows, are harder, as are those that are complex or need more time to fix. These are the sort of bugs / defects a team would find hard to defer or leave alone, and they can be critical to fix.
One of the biggest problems with such defects is when they are found. High-impact defects typically turn up in the latter part of the cycle, primarily because that is when most of the functionality becomes ready. If a number of features are in development, the earlier part of the cycle is spent building them. As time progresses, the features take shape and work starts on their integration points. This is when the workflows of the product start coming together, and when the testing team can finally check how the features integrate with one another.
The testing effort at this stage can detect workflow problems, as well as design flaws where the type and amount of information flowing from one feature to another has issues and does not match the design. These defects take more time to analyse, and may need teams from different feature areas to collaborate to track them down. As a result, fixes take longer, which becomes a problem when more such defects are found than expected and the time left to fix them shrinks. Further, the later such defects are found, the greater the risk that a fix will cause other problems; because of that risk, defects found late are more likely to go unfixed.
What do you do? Some of these issues are hard to solve. Feature development focuses on individual features early in the cycle, with the focus shifting to integration only later, so changing those timelines is difficult. But it is possible to estimate the number of bugs that will be found (far easier if this is just a new version of an existing product and historical data exists) and plan extra time for them. Many of these problems also trace back to inadequate time spent in the design and architecture phase, so teams should ensure they spend the right amount of time on those activities. Further, workflow and integration flows should be tested as features are being developed, even before integration is complete. One way to do this is to build a software harness that allows data to flow in and out of the various features before they are integrated. This lets many of the defects that would otherwise appear post-integration be found earlier in the cycle, saving a lot of time and effort.
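As a rough illustration of such a harness, the sketch below stubs a not-yet-integrated feature behind the interface the real one will use, so the workflow under test can be exercised early. All class, field and function names here are hypothetical:

```python
# Minimal sketch of a test harness that feeds data across a feature boundary
# before real integration exists. All names are hypothetical.

class ExportStub:
    """Stands in for the not-yet-integrated export feature."""
    def __init__(self):
        self.received = []

    def send(self, record):
        # Validate the contract the real export feature will rely on.
        assert "id" in record and "payload" in record, "contract violation"
        self.received.append(record)

def run_workflow(source_records, exporter):
    """Push records through the workflow under test into the exporter."""
    for rec in source_records:
        exporter.send({"id": rec["id"], "payload": rec["data"].upper()})
    return len(exporter.received)

stub = ExportStub()
count = run_workflow([{"id": 1, "data": "abc"}, {"id": 2, "data": "xyz"}], stub)
print(count, stub.received[0]["payload"])  # 2 ABC
```

If the data contract is violated, the stub fails immediately, surfacing the kind of integration defect that would otherwise only appear late in the cycle.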


Thursday, February 14, 2013

What is Telerik TeamPulse?


About Telerik TeamPulse

- Telerik TeamPulse was developed by Telerik as an agile project management tool in 2010.
- A characteristic feature of TeamPulse is that it can integrate with locally hosted Microsoft Team Foundation Server 2008, 2010 and 2012.
- However, it cannot be integrated with Microsoft Visual Studio.
- This Telerik product is available under a commercial license.
- This Telerik product is available under commercial license. 
- The features of TeamPulse include:
  1. Bug tracking
  2. Integration with Telerik's Test Studio web UI testing tool
  3. Time tracking
  4. Backlog management
  5. Email notifications
  6. A cross-project dashboard, xView, built with HTML5
  7. Task board
  8. Storyboard with WIP limits
  9. Integration with Microsoft TFS (Team Foundation Server) 2008, 2010 and 2012
  10. Requirements manager
  11. Best practices analyzer
- The extension provided with Telerik TeamPulse is the TeamPulse ideas and feedback portal, which is HTML based and compatible with HTML5.
- TeamPulse is not available as a hosted solution; it is to be used only on premise.
- When you add TeamPulse to TFS, planning, tracking and collaboration improve automatically.
- With the real-time project intelligence of Telerik TeamPulse you can improve decision making.
- It provides up-to-date views of the status of the project.
- By using TeamPulse, you bridge the geographical boundaries between team members, i.e., communication is improved.
- This tool has been designed for Scrum and Kanban teams, i.e., Scrum, Kanban or Scrumban can be used.
- It has been designed to reduce delivery time, eliminate waste and improve workflow.
- It also provides a convenient way of collecting and managing customer feedback.
- It lets you create products that your customers actually need.
- TeamPulse lets you manage projects well with the following:
  1. Work burndown
  2. Velocity
  3. Cycle time
  4. Iteration delta
  5. Agile best practices and a number of other reports

- Telerik TeamPulse suits most agile projects.
- It lets one plan, manage and monitor results, thus improving the overall process.
- TeamPulse has a rich interface with in-context guidance, making the integration with TFS faster.
- Its other features include:
  1. Automatic notifications
  2. Bug tracking
  3. Interactive Gantt charts
  4. Privacy settings
  5. Project templates
  6. Reporting
  7. Scheduling
  8. Task feedback
  9. Workload
  10. Dashboard
  11. Email integration
  12. Issue tracking
  13. Messaging / IM
  14. RSS feed
  15. Collaboration
  16. Risk management capabilities
  17. Web application
- The tool does not have remote capability features. It comes with the following resource management features:
  1. Timesheets
  2. Compare project
  3. Management software
- TeamPulse has been developed with the view that every client, scenario and environment differs from the others.
- Large enterprises require a tool that is capable of scaling with their workload.
- TeamPulse fits every scenario, even if you need some time to master it.
- First, you set up the project info, template and iterations.
- Then you create your team, and finally view the summary of your project.
- The system recommends starting with the stories.
- Being an enterprise tool, it has many features, which might make it look quite complex to use.
- But it is quite user friendly.




Monday, December 24, 2012

What is IBM Rational Performance Tester?


About IBM Rational Performance Tester

- Rational Performance Tester was developed by IBM to automate performance testing so that quality software can be delivered to end users.
- The performance tester helps improve the performance of a software system or application without degrading its quality, and this is what matters most.
- It is a performance testing tool developed specifically for identifying the presence and cause of software performance bottlenecks.
- Using this tool, testers can validate the scalability of most server and web based applications.
- Further, it can be used to create tests in a code-free style, i.e., the need for programming knowledge is eliminated.
- The test editor of this tool is rich in features, and with it one can get both detailed and high-level views of the performance tests.
- Test data variation is automated, and custom Java code can be inserted to make test customization more flexible.
- With Rational Performance Tester, the emulation of quite diverse user populations is possible.
- It also provides options for flexible modeling while reducing the processor and memory footprint.
- If any error is found, it is reported in real time so that the performance problem can be recognized immediately.
- The report is presented as HTML pages within the application window itself.
- This tool collects server resource data and integrates it with real-time performance data from the application.
- You can even perform load testing against a number of applications and protocols such as:
  1. HTTP
  2. SAP
  3. Siebel
  4. SIP
  5. TCP socket
  6. Citrix, and so on
- It works well on Windows platforms plus Linux and z/OS.
- The performance tests can be executed quickly.
- You can determine what impact the load is having on your software system or application.
- The basic features of Rational Performance Tester are:
  1. Code-free testing
  2. Root cause analysis tools, which help you dig out and diagnose the root cause
  3. Real-time reporting
  4. Test data: it reduces the headache of generating test data by providing options to generate it from scripts, datapools etc.
- It comes with a recording framework that supports recording tests through a SOCKS or HTTP proxy, service tests, and a socket recorder.
- You can make selections among Firefox profiles that already exist.
- Furthermore, it comes with a protocol traffic graph giving a real-time display of the amount of data that has been recorded.
- There is a test annotation toolbar with support for all the protocols and for screen capture annotations.
- With Rational Performance Tester, it is possible to copy and paste page elements in HTTP tests.
- Data correlation tools can be customized and written using the rules editor.
- Automatic data correlation can be disabled either fully or partially.
- All the references in a test can be viewed through a global view.
- A wider variety of conditions can now be implemented using content verification points.
- Fractional percentages are supported for user groups.
- Actions to take and messages to log can be specified for a particular condition in error handling.
- Reports can be made more presentable with the help of custom verification points.
- Java virtual machines can be used for collecting resource monitoring data.


Thursday, December 6, 2012

What is script assure technology in IBM rational functional tester?


Script Assure technology is a key technology in IBM Rational Functional Tester, an automated functional testing tool developed by IBM's Rational software division.

- Rational Functional Tester is usually employed by quality assurance people to carry out regression testing.
- Test scripts are created with a sophisticated test recorder that captures the user's actions against the AUT, or application under test.
- From these captured actions, the recording mechanism creates a test script based on .NET or Java.
- With the release of version 8.1 of Rational Functional Tester, scripts began to be represented as a series of screenshots forming a visual storyboard.
- The created script can be further enhanced using the standard commands and syntax of the language.
- These scripts are then run to validate the functionality of the software system or application.
- Test scripts can be executed in batch mode, so that they can be grouped together and run unattended.
- During recording, the user introduces verification points to capture the expected state of the system.
- Any information regarding bugs is stored in the Rational Functional Tester logs.
- During playback, Rational Functional Tester uses an object map to find and act against the interface of the application.
- However, it is possible that during development the objects change between the time the script was recorded and the time it is executed.
- Script Assure technology allows Rational Functional Tester to ignore discrepancies between the object definitions captured during recording and those found at playback, ensuring uninterrupted execution of the test scripts.
- A factor called Script Assure sensitivity determines how large an object map discrepancy is acceptable, and this factor can be set by the user.
- Developing automated scripts for regression testing of dynamic web page content, such as GUI applications, is known to be difficult for testers who are new to IBM Rational Functional Tester.
- Testers need to develop resilient scripts that can test the values of dynamic object properties; objects without sufficiently unique properties lead to problems in recognition, and thus to several failures and errors.

If you understand how IBM Rational Functional Tester works, its advantages, and how objects are recognized at run time, you can develop scripts that cope with changes and provide regression testing results that are informative enough.
Many testers freshly introduced to Rational Functional Tester find it difficult to create resilient scripts while simultaneously automating web based applications.

With Script Assure technology, scripts can be played back in Rational Functional Tester while controlling the object matching sensitivity.
Object matching sensitivity relies on a number of factors for recognizing the objects in the application. The properties recorded in the object map must match the object's properties at run time for Rational Functional Tester to recognize the object.

By default, Rational Functional Tester can recognize an object even if some properties do not match. But if too few properties match, the object in the application cannot be recognized.


Sunday, November 25, 2012

How is test management done by the test director?


If you are familiar with all the concepts of Test Director, you can apply them to your software systems or applications, since you know how it works.
Test Director implements test management in four major phases, as mentioned below:
  1. Specification of the requirements
  2. Planning the tests
  3. Execution of the tests
  4. Tracking the defects
Throughout each of the phases, the data can be analyzed using the detailed reports and graphs generated. First, you need to analyze your software system or application and determine all of your testing requirements.

Phase I - specification of Requirements

The first phase of the Test Director test management process involves the following steps:
  1. Examination of the documentation of the software system or application to determine the testing scope, i.e., test goals, strategies, objectives etc.
  2. Building a requirements tree to define the overall testing requirements.
  3. Creating a list of detailed testing requirements for each topic in the requirements tree.
  4. Writing a description for each requirement, assigning it a priority level and adding attachments if required.
  5. Generating reports and graphs to assist in the analysis of the testing requirements.
  6. Reviewing the requirements to check that they meet the specifications.

Phase II - Planning the Tests

The second phase involves the following tasks:
  1. Examination of the application, testing resources and system requirements to determine the test goals.
  2. Dividing the application into modules to be tested, and building a test plan tree that divides the application hierarchically into testing units.
  3. Determining the types of tests required for each module and adding a basic definition of each test to the test plan tree.
  4. Linking each test to the corresponding testing requirement.
  5. Developing manual tests, where each test step describes the test operations and expected outcome, and deciding which tests are to be automated.
  6. Creating test scripts for the tests to be automated, using a custom testing tool such as the Mercury Interactive testing tools.
  7. Generating graphs and reports for the analysis of the test planning data.
  8. Reviewing the tests to determine their suitability to the testing goals.

Phase III - Execution of tests

The third phase involves the following activities:
  1. Defining the tests into groups so as to meet the various testing goals of the project. This may involve testing a new version of the application or a specific function in it.
  2. Deciding which tests are to be included in the test set.
  3. Scheduling the execution of the tests and assigning tasks to different application testers.
  4. Executing the tests either manually or automatically.
  5. Viewing the results of the test runs to determine whether a defect was detected in the application under test, and generating reports and graphs for the analysis of the results.

Phase IV - Tracking the Defects

The last phase of test management, defect tracking, involves the following activities:
  1. Submitting new defects detected in the software system or application. Defects can be added during any phase by QA testers, project managers, developers etc.
  2. Reviewing the new defects and determining which ones are to be fixed.
  3. Correcting the defects that were marked to be fixed.
  4. Testing the new build of the software system or application, and repeating the whole process until all the defects are fixed.
  5. Generating graphs and reports to assist in analyzing the progress of the defect fixes and determining the date when the application can be released.


Thursday, November 22, 2012

How to track defects in Test Director?


A software system or application cannot be considered productive and useful if it is full of defects. Hence, it is essential that all the defects present in the software system or application are located and repaired during the development process.
End users, testers, and developers can submit defects detected during any phase of the testing process. Test Director helps greatly in submitting the defects detected in the software and in tracking them until they are properly repaired.

Life-cycle of a Defect

- Whenever a defect is submitted to a Test Director project, it is tracked through certain stages, namely:
  1. New
  2. Open
  3. Fixed
  4. Closed
- Once a defect is fixed, the tester retests it and can either close it or reopen it.
- When the defect is first reported to Test Director, it is assigned the status 'New' by default.
- The defect is reviewed by a quality assurance manager or a project manager, who determines whether or not the defect is to be considered for repair.
- If the defect is not considered for repair, i.e., if it is rejected, it is assigned the status 'Rejected'.
- In the opposite case, i.e., if the defect is accepted for repair, it is assigned a repair priority by the project manager or quality assurance manager and given the status 'Open'.
- The defect is then assigned to one of the members of the development team for repair.
- Once the developer repairs the defect, it is assigned the status 'Fixed'.
- The software system or application is then retested to ensure that the defect does not occur again.
- If the defect recurs, it is assigned the new status 'Reopened' by the quality assurance manager or project manager.
- If the defect has been properly repaired, it is assigned the status 'Closed' by either of the two managers.
- Test Director also provides the option of adding new defects to an existing Test Director project.
- All information and data regarding the defects found in a software system or application is handled by the Defects module of Test Director.
- The Defects module of Test Director supports the following operations:
  1. Creating defects
  2. Editing defects
  3. Linking defects to each other
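The defect life-cycle above is essentially a small state machine, and can be sketched as one. This is an illustrative Python sketch: the states and transitions follow the post, but the class and its methods are our own, not Test Director's object model.

```python
# Illustrative sketch of the defect life-cycle as a state machine.
# Each status maps to the set of statuses it may legally move to.

ALLOWED = {
    "New":      {"Open", "Rejected"},   # review: accept for repair or reject
    "Open":     {"Fixed"},              # developer repairs the defect
    "Fixed":    {"Closed", "Reopened"}, # retest: verified, or the bug recurs
    "Reopened": {"Fixed"},              # repaired again
    "Rejected": set(),                  # terminal states
    "Closed":   set(),
}

class Defect:
    def __init__(self, summary):
        self.summary = summary
        self.status = "New"   # default status on submission

    def move_to(self, new_status):
        if new_status not in ALLOWED[self.status]:
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.status = new_status

# Usage: a defect that recurs once before finally being closed.
d = Defect("Crash on save")
for step in ["Open", "Fixed", "Reopened", "Fixed", "Closed"]:
    d.move_to(step)
print(d.status)  # Closed
```

Encoding the allowed transitions in one table makes illegal moves (say, closing a defect that was never reviewed) fail loudly instead of silently corrupting the record.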

Stages of Defect Tracking Process

The following stages are involved in the defect tracking process:
  1. Adding defects
  2. Reviewing new defects
  3. Repairing open defects
  4. Testing the new build
  5. Analyzing the defect data
- In the first stage of the defect tracking process, the new defects detected in the software system or application are reported.
- In the next stage, the new defects are put through a review process and the defects to be fixed are determined.
- In the third stage, the defects are corrected by the developers assigned to the task.
- In the fourth stage, the new build of the application is tested, and this process continues until all the defects are repaired.
- In the last stage of the defect tracking process, reports are generated to assist the developers in analyzing the progress of the defect repairs.
- Also in this stage, the release date of the software system or application is determined.
- Finally, it is decided whether the rejected defects should be cancelled or acted upon further.
- The following productivity tools assist the process:
  1. Views
  2. Filters
  3. Sort
  4. Manage columns
  5. Favorites
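What the Filters and Sort tools do can be shown with a short sketch. This is illustrative Python; the record fields (`id`, `status`, `priority`) are our own invention, not Test Director's defect schema.

```python
# Illustrative sketch of filtering and sorting a defect list:
# narrow the view to open defects, then order them by priority.

defects = [
    {"id": 7, "status": "Open",   "priority": 1},
    {"id": 3, "status": "Closed", "priority": 2},
    {"id": 9, "status": "Open",   "priority": 3},
]

# Filter: show only the open defects.
open_defects = [d for d in defects if d["status"] == "Open"]

# Sort: most urgent (highest priority value) first.
open_defects.sort(key=lambda d: d["priority"], reverse=True)

print([d["id"] for d in open_defects])  # [9, 7]
```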


Wednesday, November 14, 2012

How to start with Test Director?


Application testing is not an easy task; rather, it is a very complex process, but Test Director makes it much easier. With Test Director it is possible to organize as well as manage all the phases of the application testing process. You can also specify the requirements of the testing process, plan the tests, execute them, keep an eye on the defects, and so on.

Test Director provides an organized framework for testing software systems or applications before they are deployed. A central data repository is needed for organizing and managing the application testing process, since test plans tend to evolve with changes in the existing requirements or with new requirements.

The following are the processes where you get plenty of guidance from Test Director:
  1. Requirements specification
  2. Test planning
  3. Test execution
  4. Defect tracking, etc.
Basically, the application testing process via Test Director consists of the following four phases:
  1. Specifying requirements
  2. Planning tests
  3. Running tests
  4. Tracking defects
The first phase involves:
  1. Defining the testing scope
  2. Creating requirements
  3. Detailing requirements
  4. Analyzing the requirements specification
The second phase involves:
  1. Defining the testing strategy
  2. Defining test subjects
  3. Defining tests
  4. Creating the requirements coverage
  5. Designing the test steps
  6. Automating the tests
  7. Analyzing the test plan
The third phase involves:
  1. Creating test sets
  2. Scheduling the test runs
  3. Running the tests
  4. Analyzing the test results
The fourth phase involves:
  1. Adding defects
  2. Reviewing defects
  3. Repairing open defects
  4. Testing new builds
  5. Analyzing the defect data

How to start with Test Director?

- To start with Test Director, open your web browser and enter the URL of the Test Director server.
- When you press Enter, the Test Director window appears.
- If you have a problem opening Test Director, check whether or not it has been installed on your company's web server.
- If this is the very first time you are running Test Director, you will have to wait a while as Test Director downloads and installs itself on your system.
- If it has been a long time since you last ran Test Director, it will update your version to the latest one.
- Next, you will see the Test Director login window, where you log in as a quality assurance tester.
- While logging in, you will be asked to enter the domain and project name.
- Once you are logged in, you will see the Test Director window, which comprises the following Test Director modules:
  1. Requirements module
  2. Test plan module
  3. Test lab module
  4. Defects module
All of the above-mentioned Test Director modules share some common elements, namely:
  1. Test Director toolbar
  2. Menu bar
  3. Module toolbar
  4. Tools button
  5. Help button
  6. Log out button
- You can start off straightaway by specifying the testing requirements in detail in the Requirements module, so as to provide a foundation for the rest of the application testing process.
- All these requirements are stated in the form of a requirements tree, i.e., in graphical form.
- Once specified, the requirements need to be linked to the tests.
- If defects are found, they are also linked to the requirements responsible for those defects.
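The requirements tree and its links to tests and defects can be sketched as a simple data structure. This is an illustrative Python sketch; the class, field names, and sample requirements are hypothetical, not Test Director's object model.

```python
# Illustrative sketch of a requirements tree whose nodes can be linked
# to the tests that cover them and the defects traced back to them.

class Requirement:
    def __init__(self, name):
        self.name = name
        self.children = []   # sub-requirements form the tree
        self.tests = []      # tests linked to this requirement
        self.defects = []    # defects traced to this requirement

    def add_child(self, child):
        self.children.append(child)
        return child

    def coverage_gaps(self):
        """Names of requirements in this subtree with no linked test."""
        gaps = [self.name] if not self.tests else []
        for c in self.children:
            gaps.extend(c.coverage_gaps())
        return gaps

# Usage: a toy requirements tree with one uncovered requirement.
root = Requirement("Online store")
login = root.add_child(Requirement("Login"))
cart = root.add_child(Requirement("Shopping cart"))
root.tests.append("smoke_test")
login.tests.append("test_valid_login")
print(root.coverage_gaps())  # ['Shopping cart']
```

Walking the tree for uncovered nodes is the same idea as Test Director's requirements coverage: every requirement should end up linked to at least one test.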


Wednesday, October 17, 2012

Is there any problem in using scripts created on v6.0 to 6.5 or higher versions?


In some cases, while trying to automate a Java Swing application using an early version of Silk Test, such as Silk Test 5.0.3, you may find that the objects and controls in the window of the application under test (AUT) are not recognized by Silk Test.

This is just one example of this category of problems, and at times you may wonder whether higher versions such as Silk Test 6.0 or Silk Test 6.5 are suitable for automating your application, or whether Silk Test comes with some extensions or add-ons as an alternative for overcoming such situations.

Version 6.0 of Silk Test is known to have some bugs in it; however, Segue Software is known to have resolved these issues. In general, advancing from a lower version to a higher version of Silk Test should not pose a problem.
Though this is a general statement made on the basis of observing several instances, it will not necessarily hold true in all cases. You may face problems with scripts that work on an earlier version but not on higher versions such as 6.0 and above, because the object recognition patterns are not the same and vary from version to version.

There are certain situations where two different paths of the script might be used for performing the same action, depending on the version.
Silk Test version 6.0 and Silk Test version 6.5 are quite similar, and generally no problems are experienced in advancing from version 6.0 to version 6.5.
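The "two script paths for the same action" pattern can be sketched briefly. This is illustrative Python rather than 4Test, and the two locator strings are hypothetical examples, not real Silk Test recognition patterns for any particular control.

```python
# Illustrative sketch: branching to a version-appropriate script path,
# because object recognition patterns differ across Silk Test versions.

def parse_version(text):
    """'6.5' -> (6, 5), so versions compare numerically, not as strings."""
    return tuple(int(part) for part in text.split("."))

def pick_locator(version_text):
    # Hypothetical locators; real ones depend on the AUT and the version.
    if parse_version(version_text) >= (6, 0):
        return "[JavaMainWin]SaveButton"    # newer recognition pattern
    return "MainWin.Save"                   # pre-6.0 recognition pattern

print(pick_locator("5.0.3"))  # MainWin.Save
print(pick_locator("6.5"))    # [JavaMainWin]SaveButton
```

Comparing version tuples rather than strings matters: as strings, "10.0" would sort before "6.0".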

The various client forms of Silk Test are available, such as those stated below:
  1. Silk Test Classic: This client of Silk Test makes use of the domain-specific language "4Test" for writing the test automation scripts. Like C++, 4Test is an object-oriented language and makes use of object-oriented concepts such as the following:
a)   inheritance,
b)   classes, and
c)   objects.
  2. Silk4J: This client of Silk Test enables test automation using Java as the scripting language in Eclipse.
  3. Silk4NET: This client of Silk Test enables test automation using VB.NET or C# as the scripting language in Visual Studio.
  4. Silk Test Workbench: This client of Silk Test enables testers to carry out automated testing using VB.NET as the scripting language, as well as at a visual level.
Below is the list of the Silk Test versions that have been released so far:
  1. Borland Silk Test 13 - June 2012
  2. Micro Focus Silk Test 2011 - November 2011
  3. Micro Focus Silk Test 2010 R2 WS 2 - May 2011
  4. Micro Focus Silk Test 2010 R2 - December 2010
  5. Micro Focus Silk Test 2010 - July 2010
  6. Silk Test 2009 - August 12, 2009
  7. Silk Test 2008 SP1 - July 2008
  8. Silk Test 2008 - April 2008
  9. Silk Test 2006 R2 Service Pack 2 - September 2007
  10. Silk Test 2006 R2 Service Pack 1 - June 2007
  11. Silk Test 2006 R2 - January 2007
  12. Silk Test 2006 - September 2006
  13. Silk Test 8.0 - May 2006
  14. Silk Test 7.6 - September 2005
  15. Silk Test 7.5 - June 2005
  16. Silk Test 7.1 - October 2004
  17. Silk Test 6.5 - November 2003
  18. Silk Test 6.0 - November 2002
  19. Silk Test 5.0.1 - September 1999
  20. QA Partner 4.0 - November 1996


Saturday, August 18, 2012

What databases can Test Director reside on?


Test Director is another popular software test management tool, developed by HP. Test Director, or TD as we call it in short form, has proved to be a very effective tool for the management of all the tests that are performed on a particular software system or application.

What is the use of Test Director?

- The whole burden of organizing the software testing process is taken up by Test Director itself.
- It helps in directing the process of software testing in the same way that a movie director takes up the responsibility of directing the shoot of a movie.
- There have been many cases in which Test Director has considerably helped in the creation of test cases.
- This software test management tool is categorized under web-based test management tools, since it relies heavily on web services for the management of the tests.
- Test Director works around a centralized database, i.e., a centralized database forms its center of operation.
- Test Director helps in the management of the following aspects:
       1.   Specifications documentation
       2.   Requirements documentation
       3.   Test plans
       4.   Test cases
       5.   Defect tracking
       6.   Defect management, and so on
- The original developer of Test Director is Mercury Interactive, which was later acquired by HP.
- It would not be wrong to say that Test Director is actually more of a central repository where all the information and reports related to the management of bugs are stored and processed.
- In some cases, Test Director has even helped in the execution of pre-automated test scripts with the help of a special environment known as the Test Director environment.
- The purpose of Test Director lies in the following tasks:
       1.   Creating test plans
       2.   Preparing test cases
       3.   Executing test cases
       4.   Generating bug reports
       5.   Maintaining test scripts, etc.

In this article, we shall also see which platforms are supported by Test Director.
Test Director seems to support almost all platforms that call for high levels of collaboration and communication among distributed testing teams, and where an efficient global testing process is required.

Types of Databases on which TD Resides

- Since Test Director is administered via the web, it can reside only on databases that can be accessed over the network.
- Another reason behind this is that Test Director needs to monitor all the connected users, licenses, and Test Director server information, and it needs to store this information in a database that it can access from any computer system; therefore, it uses a centralized, web-accessible database.
- Another point is that most of the projects being done using Test Director are now grouped by domain, where a domain consists of related Test Director projects.
- Such a domain can help in much more efficient organization and management of your projects, but a centralized database is required to access it.
- Test Director also comes with a feature called the collaboration module, via which one Test Director user can chat with another, and this too requires a web-based centralized database.

