


Sunday, May 26, 2013

Ensuring that you have knowledge of changes in components that you integrate

Most modern-day software applications use a number of components within them. The advantage of using components is that they bring a lot of specialized functionality that would otherwise be difficult for the application development team to write itself. And even if the team could write it all, just maintaining all these specialized features can be a big problem. Consider the case of a software application that needs to create a video and burn it onto media such as CD, DVD or Blu-ray. These are specialized features that are already handled well by many components, both open source and commercial. If the product team wanted to write all this functionality as part of their product, they would also need to maintain all that code: it would have to work with any new optical media that is introduced (as happened when Blu-ray arrived) and keep working as new versions of Windows and Mac OS emerge. For all these reasons, it is a better strategy to use components for specialized functionality.
However, some risks come with using components, and these risks exist whether the component comes from another group within your organization or from an external organization. One of the biggest risks arises when the component is used by multiple software applications and your application does not have dedicated use of it. In such cases, you need to stay well informed about the changes happening in the component. Taking the example above of a component that writes to DVD, another product using the same component may request a change for some need of theirs, and the component makers may decide that the need is genuine and add the requested feature.
There is a fair chance that the new feature does not affect your application, but there is still a chance that it affects your feature workflow. For example, you may be using the component in a silent mode, where no dialog from the component ever shows up, while another customer requests that a dialog appear for a need of theirs. In such a case you need to know about the change, and your discussions with the component makers should ensure there is a way to keep the new dialog out of your workflow, such as a parameter passed from your application code to the component that suppresses it.
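To make this concrete, here is a small, hypothetical sketch of how an application might guard itself at the integration layer. The component name, its version() and burn() calls, and the suppress_dialogs parameter are all invented for illustration; the point is simply that the application pins the component version it was tested against and asks for silent operation explicitly rather than relying on the component's defaults.

```python
# Hypothetical integration wrapper around a third-party disc-burning component.
# All names (version(), burn(), suppress_dialogs, EXPECTED_VERSION) are invented.
import logging

EXPECTED_VERSION = "2.4"   # the component version this application was tested against

class BurnerIntegration:
    def __init__(self, component):
        self.component = component
        if component.version() != EXPECTED_VERSION:
            # A new version may contain changes (such as new dialogs) that were
            # requested by other customers; review its change list before shipping.
            logging.warning("Component version %s differs from tested version %s",
                            component.version(), EXPECTED_VERSION)

    def burn_video(self, video_path, drive):
        # Request silent operation explicitly so that a dialog added for another
        # product's workflow cannot appear in ours.
        return self.component.burn(source=video_path, drive=drive,
                                    suppress_dialogs=True)
```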
In all such cases, even though it is the component maker's duty to publish a list of changes from the previous version, it is your duty to check that none of those changes affects your application. And if you are integrating an open source component, you may not be able to get the external team to make changes for you, so you need to check their change list very carefully and assess its impact on your application. Only if there is no impact should you incorporate the new version of the component into your application.
This is one of the risks to track in your project planning; depending on the number of components you use and the profile of those components, it can be a high-level risk that needs to be checked at regular intervals.


Tuesday, March 12, 2013

What are autonomic systems? What is the basic concept behind autonomic systems?


In this article we shall discuss autonomic systems, but before moving on to that we shall briefly discuss autonomic computing. 

About Autonomic Computing

- Distributed computing resources can have the ability of self-management. 
- This kind of computing is called autonomic computing, and such systems are called autonomic systems. 
- Because of their unique capabilities, these systems are able to adapt to changes that are both predictable and unpredictable. 
- At the same time, these systems keep their intrinsic complexity hidden from users as well as operators. 
- The concept of autonomic computing was initiated by IBM in 2001. 
- It was started in order to curb the growing complexity of managing computer systems and to remove complexity barriers that hinder development.

About Autonomic Systems

- Autonomic systems have the power to make decisions of their own. 
- They do this guided by high-level policies. 
- These systems automatically check and optimize their status and adapt to changed conditions. 
- The framework of these computing systems is made up of various autonomic components that continuously interact with each other. 
The following are used to model an autonomic component (a minimal sketch of such a component appears at the end of this post):
  1. Two main control loops, namely the global and the local.
  2. Sensors (required for self-monitoring)
  3. Effectors (required for self-adjustment)
  4. Knowledge
  5. Adapter or planner
- The number of computing devices is increasing by a great margin every year. 
- Not only this, each device's complexity is also increasing. 
- At present, highly skilled humans are responsible for managing this huge volume of complexity. 
- The problem is that the number of such skilled personnel is small, and this has led to a rise in labor costs.
- It is true that the speed and automation of computing systems have revolutionized the way the world runs, but there is now a need for systems that can maintain themselves without human intervention. 
- Complexity, particularly of management, is a major problem in today's distributed computing systems. 
- Organizations and institutions employ large-scale computer networks for their computation and communication purposes. 
- These systems run diverse distributed applications that deal with a number of tasks. 
- These networks are increasingly pervaded by mobile computing. 
- This means that employees stay in contact with their organizations outside the office through devices such as PDAs, mobile phones and laptops that connect through wireless technologies. 
- All of this adds to the complexity of the overall network, which cannot be managed by human operators alone. 
- There are 3 main disadvantages of manual operation:
  1. Consumes more time
  2. Expensive
  3. Prone to errors
- Autonomic systems are a solution to such problems since they are self-adjusting and do not require human intervention. 
- The inspiration behind autonomic systems is the autonomic nervous system found in humans.
- This self-managing system controls all the bodily functions unconsciously. 
- In autonomic systems, the human operator just has to specify the high-level goals, rules and policies that guide the management. 

- There are 4 functional areas of an autonomic system:
  1. Self–configuration: Responsible for the automatic configuration of the network components.
  2. Self–healing: Responsible for the automatic detection and correction of the errors.
  3. Self–optimization: Monitors and controls the resources automatically.
  4. Self–protection: Identifies the attacks and provides protection against them.
- Below are some characteristics of autonomic systems:
  1. Automatic
  2. Adaptive
  3. Aware
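As mentioned above, here is a minimal, purely illustrative sketch of the local control loop of an autonomic component, showing how sensors, knowledge, a planner and effectors fit together under a high-level policy. The sensor and effector functions, the policy threshold, and the loop interval are all invented for this example; it is not any particular autonomic framework.

```python
# Minimal illustrative autonomic control loop (local loop of one component).
# The sensor, effector, policy and interval below are invented for this sketch.
import time

POLICY = {"max_cpu_load": 0.80}       # high-level policy set by the human operator
knowledge = {"load_history": []}      # shared knowledge base

def read_cpu_load():                  # sensor: supports self-monitoring
    return 0.5                        # stub; a real sensor would measure the system

def start_extra_worker():             # effector: supports self-adjustment
    print("self-optimization: starting an additional worker")

def control_loop(iterations=3, interval_s=5):
    for _ in range(iterations):
        load = read_cpu_load()                       # Monitor
        knowledge["load_history"].append(load)       # update Knowledge
        overloaded = load > POLICY["max_cpu_load"]   # Analyze against the policy
        if overloaded:                               # Plan: pick an adaptation
            start_extra_worker()                     # Execute via an effector
        time.sleep(interval_s)

if __name__ == "__main__":
    control_loop(iterations=2, interval_s=0)         # run two quick cycles
```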


Thursday, March 7, 2013

What is meant by Holographic Data Storage?


Currently, conventional magnetic and optical data storage dominates the field of high-capacity data storage, but another technology, holographic data storage, has the potential to lead this area. What is this technology? We shall discuss it in this article. 

Difference between Conventional storage methods and Holographic Data Storage

- Conventional optical and magnetic storage technologies depend on recording individual data bits as distinct optical or magnetic changes on the medium's surface. 
- In holographic technology, the information is recorded throughout the medium's volume.
- Multiple images can be recorded in the same area of the medium by using light at varying angles. 
- Further, in conventional storage methods the recording takes place in a linear fashion, 
- whereas in holographic storage millions of bits can be recorded and read in parallel, increasing data transfer rates beyond what conventional methods offer (a rough illustration of the arithmetic follows this list).
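To see why parallel readout matters for transfer rates, here is a rough back-of-the-envelope illustration. The page size and exposure rate are assumptions chosen for easy arithmetic, not the specifications of any real holographic drive.

```python
# Illustrative numbers only; not the specification of any real device.
bits_per_page = 1_000_000     # assume one hologram "page" carries a million bits
pages_per_second = 1_000      # assume the detector reads a thousand pages per second

parallel_rate = bits_per_page * pages_per_second   # whole page read per exposure
serial_rate = 1 * pages_per_second                 # only one bit read per exposure

print(parallel_rate / 1e9, "Gbit/s with full-page parallel readout")   # 1.0
print(serial_rate / 1e3, "kbit/s if one bit were read per exposure")   # 1.0
```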

Features of Holographic Data Storage

Data Recording: 
- The information is stored in a thick photosensitive optical material with the help of an optical interference pattern. 
- A laser beam is split into two parts: a signal beam that carries the data as a pattern of light and dark pixels, and a reference beam; both are projected towards the medium, where their interference pattern is recorded. 
- A multitude of holograms can be recorded in one volume by making adjustments to the wavelength, the reference beam angle, the media position and so on.

Data Reading: 
- For reading the stored data, the reference beam used during recording is reproduced. 
- This light beam is focused on the photosensitive material and illuminates the stored interference pattern, which diffracts the light. 
- The reconstructed pattern is then projected onto a detector. 
- The data is read in parallel, millions of bits at a time. 
- This means a high data transfer rate. 
- It takes less than 0.2 seconds to access information from a holographic drive. 
- Holographic data storage offers a solution to many companies for preserving and archiving information. 
- The WORM (write once, read many) approach provides assurance that content cannot be modified or overwritten once written. 
- This technology promises storage of data without degradation for 50-plus years, which is considerably longer than current options.
- However, even if it becomes possible to store data for 50 to 100 years in the same format, that longevity may matter little, because the format itself is likely to change in less than 10 years.  

Types of Holographic Media

- Holographic media is of two types, namely:
  1. Re-writable media and
  2. Write-once media.
- In the former, the changes can be reversed; in the latter, the changes are irreversible.

There is little point in holographic storage trying to compete head-on with hard drives; instead, it can find a market based on virtues such as access speed. 
- Holographic data storage technology does seem to have a future in the video game market. 
In 2009, GE Global Research came up with its own demonstration of a holographic storage medium: discs whose read mechanisms are somewhat similar to those of Blu-ray disc players. 


Sunday, March 3, 2013

What is the need for Agile Process Improvement?


It is commonly seen that a number of change projects are designed and published, but none of them actually goes into implementation. Most of the time is spent writing and publishing them, and this approach usually fails. We should stop working this way and develop a new one. Below are some common reactions to such projects in modern business:
  1. Developing a stronger project
  2. Changing the people working on it
  3. Threatening the project with termination
  4. Appointing a committee to analyze the project
  5. Taking examples from other organizations to see how they manage to do it
  6. Getting down to a dead project
  7. Tagging a dead project as still worth achieving something
  8. Putting many different projects together so as to increase the benefit
  9. Additional training
- A drop in the delivery of normal work always follows a change. 
- Big change projects are either dropped or rejected.
- This happens because the changes introduced by such projects are mandatory to follow,
- which threatens the normal functioning of the organization. 
- So the organization is eventually compelled to kill the whole process and go back to the old way of working. 
- Instead of following this approach, a step-by-step process improvement can be followed, which is exactly what agile process improvement is. 
Now you must be convinced why agile process improvement is actually needed. 
Changes need to be adaptive; only then will the process stay balanced. 
- An example is achieving a CMMI maturity level: it takes approximately 2 years to complete, and during that time the organization may face:
  1. Restructuring
  2. New competitors
  3. New products
- Only agile methods can make the improvement adaptive to such changes.
- Change cycles, when followed systematically, produce results every 2 to 6 weeks.
- Thus, your organization's workload and improvement stay balanced. 
- Early identification of issues becomes possible, giving the organization a chance to resolve them early. 
- By and by, the organization learns how to tackle problems and how to improve its work.
- In the end it is able to adapt to the ever-changing needs of the business.
- The responsibility for deploying and evaluating the improvements is taken by PPQA (Process and Product Quality Assurance). 
- The whole process is implemented in 4 sprints:
  1. Prototyping
  2. Piloting
  3. Deploying
  4. Evaluating
- Broad participation and leadership are required for these changes to take place.
- Other agile techniques, along with Scrum, can also be used in SPI (software process improvement).
- The improvements can be continuously integrated into the way the organization works. 
- The way of working, including assets and definitions, can also be refactored by integrating the improvements step by step.
- Pair work can be carried out on improvements. 
- Collective ownership can be created across the organization. 
- Evaluations and pilots can be used for testing purposes. 
- In order to succeed with the sprints, it is important that only simple solutions be developed. 
- An organization can write its coaching material with the help of its work description standards.
- This sprint technique helps the organization strike a balance between improvement and the normal workload. 
- In agile process improvement, simple solutions are preferred over complex ones.
- Here, the status quo and the vision are developed using CMMI and SCAMPI. 
- The status quo and vision are necessary for beginning software process improvement.
- SPI, when conducted properly, produces useful work; otherwise only unnecessary documentation gets produced.
- An improvement in the process is an improvement in the work. 
- Improving work is what people prefer. 


Monday, February 4, 2013

How are unit and integration testing done in EiffelStudio?


- EiffelStudio provides a development environment that is complete and well integrated.
- This environment is well suited to performing unit testing and integration testing. 
- EiffelStudio lets you create software systems and applications that are scalable, robust and fast. 
- With EiffelStudio, you can model your application just the way you want. 
- EiffelStudio has effective tools for capturing your thought process as well as the requirements. 
- Once you are ready to follow your design, you can start building on the model that you have already created. 
- Both the creation and the implementation of models can be done through EiffelStudio. 
- There is no need to set one thing aside and start over. 
- Further, you do not need any external tools to go back and make modifications to the architecture. 
- EiffelStudio provides all the tools. 
- EiffelStudio provides round-trip engineering by default, in addition to productivity and test metrics tools.
EiffelStudio provides facilities for unit and integration testing through its AutoTest component. 
- With AutoTest, software developers can build sophisticated unit and integration test suites that remain quite simple in their construction. 
- With AutoTest, the developer can execute and test Eiffel class code at the feature level. 
- At this level, the testing is considered to be unit testing. 
- However, if the code is executed and tested across entire class systems, the testing is considered integration testing.
- Executing this code also exercises the contracts of the attributes and features that get executed. 
- AutoTest thus also serves as a means of testing the assumptions made in the design, as expressed by the contract conditions. 
- Therefore, unit and integration testing do not need to re-test, through separate test oracles or assertions, properties that are already specified as contracts in the class texts (a small illustrative sketch of this distinction appears at the end of this post).

- AutoTest lays out three methods for creating test cases for unit and integration testing:
  1. A test class is created by AutoTest for tests that are written manually. This test class contains the test framework, so the user only needs to supply the code for the test itself.
  2. The second method creates tests based on a failure of the application at run time. Such a test is known as an 'extracted' test. Whenever an unexpected failure occurs while the system under test is running, AutoTest works on the information provided by the debugger to produce a new test case that reproduces the calls and the state that caused the system to fail. After the failure is fixed, the extracted test is added to the suite to guard against a recurrence of the problem.
  3. The third method produces what are known as generated tests. The user provides the classes for which tests are required, plus some additional information that AutoTest needs to control the generation of the tests. The tool then calls the routines of the target classes with randomized argument values. Whenever a class invariant or a postcondition is violated, a single new test is created that reproduces the failing call.
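Purely as an illustration of the unit-versus-integration distinction and of contracts being exercised during test execution, here is a small sketch. It is plain Python rather than Eiffel, and the Account/Bank classes and their contracts are invented for the example; in Eiffel the pre- and postconditions would be require/ensure clauses that AutoTest checks automatically when it runs the code.

```python
# Illustrative analogy in Python; class names and contracts are invented.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        assert amount > 0, "precondition: amount must be positive"   # ~ require
        old = self.balance
        self.balance += amount
        assert self.balance == old + amount, "postcondition"          # ~ ensure

class Bank:
    def __init__(self):
        self.accounts = {}

    def open_account(self, name):
        self.accounts[name] = Account()
        return self.accounts[name]

def test_deposit():                 # "unit" test: exercises a single feature
    a = Account()
    a.deposit(50)
    assert a.balance == 50

def test_bank_and_account():        # "integration" test: exercises classes together
    b = Bank()
    b.open_account("alice").deposit(30)
    assert b.accounts["alice"].balance == 30

if __name__ == "__main__":
    test_deposit()
    test_bank_and_account()
    print("all tests passed")
```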


Thursday, January 31, 2013

Explain EiffelStudio? What technology is used by EiffelStudio?


EiffelStudio provides the development environment for the Eiffel programming language. Both of these, EiffelStudio and the Eiffel programming language, have been developed by Eiffel Software. At present, version 7.1 has been released.

- EiffelStudio consists of a number of development tools, namely:
  1. Compiler
  2. Interpreter
  3. Debugger
  4. Browser
  5. Metrics tool
  6. Profiler
  7. Diagram tool
- All these tools have been integrated under the single user interface of EiffelStudio. 
- This user interface is in turn based on several UI paradigms specific to EiffelStudio. 
- In particular, it offers effective browsing through the 'pick and drop' mechanism. 
- EiffelStudio supports a number of platforms, including the following:
  1. Windows
  2. Linux
  3. Mac OS
  4. VMS and
  5. Solaris
- This Eiffel Software product comes with a GPL license. 
- However, a number of other licenses are also available. 
- EiffelStudio is developed as an open source project. 
- Beta versions of the next release are made available to the public at regular intervals. 
- The participation of the Eiffel community in the development of the product has been quite active. 
- A list of the open projects has even been made available on the Origo web site. 
- That site is hosted at ETH Zurich. 
- Along with the list, information about the discussion forums, basic source code for checkout and so on has also been put up. 
- Version 7.1, the latest release, came out in June 2012, and successive beta releases were made available soon after that.

Technology behind EiffelStudio

The compilation technology used by EiffelStudio, called Melting Ice, is unique to Eiffel Software and is their trademark.
- This technology integrates the interpretation of changed elements with the proper compilation process. 
- It offers a very fast turnaround time, 
- because the time taken for recompilation depends on the size of the change to be made and not on the overall size of the program. 
- Although such 'melted' programs can be delivered as they are, a finalization step is normally performed before the product is released.
- Finalization is a highly optimized compilation process which takes longer, but the executable it generates is optimized.
- Interpretation in EiffelStudio is carried out through a byte-code-oriented virtual machine. 
- The compiler generates either .NET CIL or C. 

History of EiffelStudio

- The roots of EiffelStudio date back to the first implementation of Eiffel by Interactive Software Engineering Inc. 
- Interactive Software Engineering Inc. was the predecessor of Eiffel Software. 
- The first implementation appeared in 1986. 
- The current technology used in EiffelStudio evolved from an earlier technology called EiffelBench, which saw its first use in 1990. 
- It was used along with version 3 of the Eiffel programming language. 
- In 2001, the name EiffelBench was changed to what we know now, EiffelStudio. 
- That was also the year the environment was ported to Windows and a number of other platforms. 
- Originally, it was only available for the Unix platform.
- Since 2001, EiffelStudio has seen several major releases with new features:
  1. Version 5.0 (July 2001): the first proper version; saw the integration of the EiffelCase tool with EiffelBench as its diagram tool.
  2. Version 5.1 (December 2001): support for .NET applications; also called Eiffel#.
  3. Version 5.2 (November 2002): debugging capabilities were extended, an improved mechanism for interfacing with C and C++ was introduced, and EiffelBuild, roundtripping abilities etc. were added.



Wednesday, January 30, 2013

Give an overview of the Diagram Tool of EiffelStudio


EiffelStudio combines a number of development environment tools such as:
  1. Compiler
  2. Interpreter
  3. Debugger
  4. Browser
  5. Metrics tool
  6. Profiler
  7. Diagram tool
In this article we shall discuss the last of these tools, the Diagram Tool. 

A graphical view of the software structure is provided by EiffelStudio's Diagram Tool. This tool can be used effectively in both:
  1. The forward engineering process: here it is used as a design tool that produces software from graphical descriptions.
  2. The reverse engineering process: here it automatically produces graphical representations of program texts that already exist.
The Diagram Tool guarantees that changes made in either of the above two processes are integrated with the other; this is called round-trip engineering. 
It uses any of the following two graphical notations:
  1. BON (business object notation) and
  2. UML (unified modeling language)
By default the notation used is BON. EiffelStudio can display several views of classes and their features. 
It provides various types of views such as:
1. Text view: It displays the full text of the program.
2. Contract view: It displays only the interface but with the contracts.
3. Flat view: It displays the inherited features as well.
4. Clients view: It displays the classes and features that depend on a given class or feature.
5. Inheritance history:  It shows how a feature is affected when it goes up or down the inheritance structure.
There are a number of other views also available. EiffelStudio relies heavily on a user interface paradigm based on holes, pebbles and other development objects. 

Software developers using EiffelStudio have to deal with abstractions that represent the following:
  1. Classes
  2. Features
  3. Breakpoints
  4. Clusters
  5. Other development objects

- Developers deal with these abstractions in the same way that an object-oriented program deals with objects at run time.
- In EiffelStudio, wherever a development object appears in the interface, it can be picked regardless of how it is visually represented, i.e., what name it is given, what symbol is used and so on. 
- To pick a development object you just have to right-click on it. 
- The moment you click on it, the cursor changes to a 'pebble', a special symbol that corresponds to the type of the object, such as:
  1. Bubble or ellipse for class
  2. Dot for breakpoint
  3. Cross for feature and so on.
- As the cursor moves, a line is drawn from the object's original position to the current cursor position. 
- The object can be dropped at any place (a 'hole') whose symbol matches the pebble.
- An object can also be dropped in a window that is compatible with it. 
- Multiple views can be combined to make it easy to browse through the complex structure of the system. 
- This also makes it possible to follow the transformations such as re-naming, un-definition and re-definition that are applied to the features while inheriting.
- The Diagram Tool of EiffelStudio is a major help in creating applications that are robust, scalable and fast. 
- It helps you model your application just the way you want. 
- It helps in capturing your requirements as well as your thought processes. 
- The tools of EiffelStudio ensure that you don't have to use separate tools to make changes to the architecture of the system while designing.



Sunday, December 16, 2012

What are Six Best Practices in Rational Unified Process?


The IBM Rational Unified Process is a commercial packaging of approaches and practices that have been proven for the development of software systems and applications. It is based upon the following six best practices:
  1. Iterative development of the software systems and applications
  2. Management of the requirements
  3. Use of architecture based upon components
  4. Visual modeling of the software system
  5. Verification of the software system
  6. Controlling the changes to the software system or application
The above practices are called best practices not because their value can be precisely quantified, but because they are in common use in the software industry by most organizations that are successful and reputable.
In the Rational Unified Process, each and every member of the team gets the templates, guidelines and tools that are necessary for the whole team to take full advantage of these practices.

Basic Practices In Rational Unified Process in Detail

Iterative development of the software systems or applications:  
Software systems and applications are quite sophisticated and therefore they make it impossible to define the problem first in sequence.
- By sequence we mean, first defining the whole problem, designing a solution of the problem, building the software system or application and then finally testing the software system. 
- In order to deal with such software systems and applications there is a requirement of an iterative approach so that an increase in the understanding of the problem can be made in a series of successive refinements. 
- This also helps in developing an effective solution in increments done over multiple iterations.

Management of the Requirements: 
The Rational Unified Process gives a description of:
- how the elicitation, organization and documentation of constraints as well as functionality are to be done, 
- how trade-offs and decisions have to be tracked and documented, and 
- how the business requirements are to be captured and communicated.

Use of architecture based upon components: 
- The focus of the development process is on the early development and base-lining of an architecture that is both robust and executable. 
- It describes how to build a resilient architecture that is flexible, accommodates change easily, can be easily understood and effectively promotes the reuse of existing software artifacts. 
- The Rational Unified Process provides strong support for component-based development. 
- By components we mean subsystems and non-trivial elements that fulfill a clear function.

Visual modeling of the software system: 
- The Rational Unified Process shows you how a software system or application can be visually modeled in order to capture the structure and behavior of its architectural components. 
- This enables you to hide details and develop code using graphical building blocks. 
- Such visual abstractions help you communicate the different aspects of the software system or application.

Verification of the software system: 
- Poor reliability dramatically cuts the chances of a software system or application being accepted.
- Therefore it is important to review quality with respect to factors such as functionality, reliability, system performance and application performance.

Controlling the changes to the software system or application: 
- The ability to manage and track changes is critical to the success of any software system or application. 
- The Rational Unified Process helps you cope with these issues as well.




Monday, December 3, 2012

What is a trace-ability alert? How to trigger a trace-ability alert in TestDirector?


A trace-ability alert is an e-mail notification sent to the responsible people whenever some change is made to the project. This is done by instructing TestDirector to create an alert whenever a change occurs and to send e-mails accordingly. One's own follow-up alerts can also be added. 
There are certain rules called the trace-ability notification rules (based upon the associations made in TestDirector among tests, requirements and defects) which are activated by the TestDirector administrator in order to generate automatic trace-ability alerts.

On what occasions is a trace-ability alert issued?

TestDirector can generate trace-ability alerts only for the following events:
  1. Whenever a requirement changes (except for a change of status), the designer of the associated tests is notified by TestDirector.
  2. Whenever a requirement that has an associated test changes, all the project users are notified by TestDirector.
  3. Whenever a defect's status changes to 'fixed', the responsible tester of the associated test is notified by TestDirector.
  4. Whenever a test run is successful, the user assigned to the associated test is notified by TestDirector.

Steps to trigger trace-ability alert

  1. Log on to the project as a different user.
  2. Click on the test plan tab to turn on the test plan module, which will display the test plan tree. Expand the relevant subject folders and select the required test. A Designer box displaying the user name is shown in the details tab in the right pane. Note that whenever an associated requirement changes, the trace-ability notification is seen only by the designer.
  3. Click on the requirements tab to turn on the requirements tree, and make sure that it is in document view.
  4. Among the requirements, choose the one that you want to change.
  5. To change the priority of the requirement, click on the priority down arrow and select the required priority. This will cause TestDirector to generate an alert for the test associated with the requirement selected above. An e-mail will also be sent to the designer who designed this test.
  6. When you are done, log out of the project by clicking on the log out button at the right side of the window.

How to view a trace-ability alert?

Trace-ability changes can be viewed for a single entity or for all the entities in the project. By entity we mean a test, a defect or a test instance. To view a trace-ability alert, follow the steps mentioned below:
  1. Log on to the project as the designer of the test.
  2. Click on the test plan tab to view the test plan tree. Expand the subject folders to display the test. You will see that the test has a trace changes flag, which indicates that a change was made to a requirement associated with it.
  3. Clicking on the trace changes flag for the test will let you view the trace-ability alert. The trace changes dialog box will also open up. Clicking on the requirement link will make TestDirector highlight that particular requirement in the requirements module.
  4. To view all of the trace-ability alerts, click on the trace all changes button in the common TestDirector tool bar. A dialog box listing all the trace-ability changes will open up.
  5. Once done, close the dialog box. 


Friday, November 30, 2012

What is a follow up alert? How to create follow up alerts in TestDirector?


TestDirector is Mercury Interactive's test management tool. 
- It helps quality assurance personnel plan and organize the whole testing process, usually termed the TestDirector testing process.
- It lets you build a database of manual as well as automated test cases, test cycles, test runs, defect-tracking reports and so on. 
- TestDirector can be instructed to create alerts automatically and notify the responsible people whenever the project changes. 
Alerts are generated for changes that affect the project in one or more ways. 
- To generate automatic alerts, the administrator can activate the trace-ability notification rules based on the associations made among the requirements, defects and tests.

What is a Follow Up Alert?

- TestDirector lets you add your own follow up flag to a defect, test instance or specific test, so as to remind yourself to follow up on an issue. 
- When the follow up date arrives, an e-mail is sent to the person's mail box. 
- TestDirector notifies the tester by adding a trace changes flag to the changed entity or by mailing a notification whenever a change is made to a requirement, defect or test in the project. 
- Creating follow-up alerts is always useful since you are reminded whenever it is required to follow up on some issue. 

Requirements for Follow up Alert

- TestDirector 8.0 should be installed on your system. 
- You must have access to all four modules of TestDirector, namely requirements, test plan, test lab and defects. 
- You must have either a sample project or an actual project on which to carry out the exercise.
- Work with a new copy of the project. 
- You should also have either a sample application or an actual application.  

Now we shall discuss the procedure for adding a follow up flag to a defect whose status needs to be checked. 
When the follow up date comes, the flag icon turns red and TestDirector sends a notification via e-mail. One thing to keep in mind is that flags carry a specific user name, which means only the user whose name is on the flag will be able to see it. 

Steps to create a Follow-up Alert

Follow the steps mentioned below to create a follow up alert:
  1. Click on the defects tab so as to turn on the defects module.
  2. From the defects grid, select the defect for which you want to set up a follow up flag.
  3. To create a follow up alert, click on the 'flag for follow up' button; a flag for follow up dialog box will open up. Fill in the following details:
a)   Follow up by: select the date.
b)   Description: type a description.
Once you are done filling in the details, click OK. A flag icon will be added to the defect record by TestDirector.
  4. To display the information bar for your follow up alert, double-click on the defect that has the follow up flag. A defect details dialog box will pop up and display a yellow information bar with the follow up alert.
  5. To close the dialog box, click on the cancel button.

