Here are some similar-sounding terms that confuse many people, and we need more clarity on the differences between them. So here is an attempt at that: the difference between quality assurance, quality control, and testing, explained below.
A large number of people are confused about the difference between quality assurance (QA), quality control (QC), and testing. These terms are closely related, but they are essentially different concepts, and we need to understand the difference between them. Being different does not minimize the importance of any of them, and since all three are critical to managing the risks of developing and maintaining software, it is important for software managers to understand the differences. The definitions of these terms are below:
• Quality Assurance: Defined as a set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
• Quality Control: Defined as a set of activities designed to evaluate a developed work product.
• Testing: The process of executing a system with the intent of finding defects, including all the necessary planning for the testing; it does not mean just the actual execution of test cases.
QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project - such as whether the requirements are being defined at the proper level of detail. In contrast, QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements. Testing is one example of a QC activity, but there are others such as inspections. Both QA and QC activities are generally required for successful software development.
There can be disagreements over who should be responsible for QA and QC activities -- i.e., whether a group external to the project management structure should have responsibility for either QA or QC. The correct answer will vary depending on the situation, but here are some suggestions:
• While line management can and should have the primary responsibility for implementing the appropriate QA, QC and testing activities on a project, an external QA function can provide valuable expertise and perspective, and its help is, in most cases, beneficial.
• There is no single right amount of external QA/QC; it should be a function of the project risk and the process maturity of an organization. As organizations mature, management and staff will implement the proper QA and QC approaches as a matter of habit, and the need for external guidance reduces, with periodic review becoming more relevant.
Tuesday, December 23, 2008
Difference between various software testing terms
Posted by Ashish Agarwal at 12/23/2008 10:57:00 PM 0 comments
Labels: Explanation, Processes, Terms, Testing
Monday, December 22, 2008
Stages of a complete test cycle
People who are involved in the business of software testing know many parts of the testing process, but few people have covered all the stages involved, from the time of getting the project requirements to the last stages of testing. Here is a timeline of the steps involved in this process:
• Requirements Phase: Get the requirements, along with the functional design and the internal design specifications
• Resourcing estimation: Obtain budget and schedule requirements
• Get into details of the project-related personnel and their responsibilities and the reporting requirements
• Work out the required processes (such as release processes, change processes, etc.). Defining such processes can typically take a lot of time.
• Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
• Test methods: This is the time to plan and determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc., the whole breakup of the types of tests to be done
• Determine test environment requirements (hardware, software, communications, etc.). These are critical to determine because the testing success depends on getting a good approximation of the test environment
• Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.). In many cases, a complete coverage of these tools is not done.
• Determine test input data requirements. This can be a fairly intensive task, and needs to be thought through carefully.
• People assignment: This is the stage where, for the project, tasks are identified, people are assigned responsibility for them, and labor requirements are calculated.
• Work out schedule estimates, timelines, milestones. Absolutely critical, since these determine the overall testing schedule along with resource needs.
• Determine input equivalence classes, boundary value analyses, error classes (a small sketch of this appears after this list)
• Prepare test plan document and have needed reviews/approvals. A test plan document encapsulates the entire testing proposal and needs to be properly reviewed for completeness.
• Once the test plan is done and accepted, the next step is to write test cases
• Have needed reviews/inspections/approvals of test cases. This may include reviews by the development team as well.
• Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
• Obtain and install software releases. If a daily build is available, the smoke testing regime for build acceptance needs to be brought in.
• Perform tests. The actual phase where you start to see the results of all the previous efforts.
• Evaluate and report results
• Track problems/bugs and fixes. This phase can take up a substantial portion of the overall project time.
• Retest as needed, including regression testing
• Maintain and update test plans, test cases, test environment, and testware through life cycle
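To make the equivalence class / boundary value step above concrete, here is a minimal sketch in Python. The field and its valid range (an age input accepting 18 to 99) are hypothetical values invented for illustration, not taken from any particular project:

```python
# Minimal sketch: deriving test inputs from boundary value analysis and
# equivalence classes for a hypothetical numeric field that accepts 18-99.

def boundary_values(low, high):
    """Values at, just inside, and just outside each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def equivalence_classes(low, high):
    """One representative per class: below range (invalid),
    inside range (valid), above range (invalid)."""
    return {
        "invalid_low": low - 10,
        "valid": (low + high) // 2,
        "invalid_high": high + 10,
    }

if __name__ == "__main__":
    print(boundary_values(18, 99))      # [17, 18, 19, 98, 99, 100]
    print(equivalence_classes(18, 99))  # {'invalid_low': 8, 'valid': 58, 'invalid_high': 109}
```

Each generated value then becomes the input for a test case, with the expected result (accept or reject) determined by which class the value falls in.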
Posted by Ashish Agarwal at 12/22/2008 11:12:00 PM 0 comments
Labels: Cycle, Phases, Testing
Tuesday, December 16, 2008
Types of testing
What are the different types of testing that one normally comes across? If there are others besides these, please add them in the comments.
• Black box testing - This is a testing method that is not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
• White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions. This is more like testing based on code, and is typically handled by a person who has knowledge of coding.
Black box and white box testing are the 2 most well-known types of testing.
In addition, there is testing carried out at different stages, such as unit, integration and system testing.
• Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. This is testing that happens at the earliest stage, and can be done either by the programmer or by testers (further stages of testing are typically not done by programmers). It is not always easily done unless the application has a well-designed architecture with tight code, and it may require developing test driver modules or test harnesses. It could also be used to denote something as basic as testing each field to see whether the field-level validations are okay. (A minimal unit test sketch appears after this list.)
• Incremental integration testing - this stage of testing means the continuous testing of an application as and when new functionality is added to the application; the testing requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; this testing is done by programmers or by testers.
• Integration testing - This form of testing implies the testing of the combined parts of an application to determine if they function together correctly. When we say combined parts, this can mean code modules, individual applications, client and server applications on a network, etc. Integration testing can reveal whether parts that seem to be well built by themselves work properly when they are all fitted together. Integration testing should be done by testers.
• Functional testing - Functional testing is black-box testing geared to the functional requirements of an application; functional testing should be done by testers. Functional testing is geared to validate the workflows that happen in the project.
• System testing - System testing is a black-box type of testing that is based on the overall requirements specifications; the testing covers all combined parts of a system and is meant to validate the marketing requirements for the project.
• End-to-end testing - End-to-end testing sounds very similar to system testing from the name itself, and it is indeed similar. The testing operates at the 'macro' end of the test scale, at the big-picture level; end-to-end testing involves testing of the complete application environment in a situation that simulates actual real-world use (for example, interacting with a database, using network communications, or interacting with other dependencies in the system such as hardware, applications, or other systems if appropriate).
• Sanity testing - Sanity testing, as it sounds, is typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. This sort of testing could also happen on a regular basis to ensure that regular builds are worth testing. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state. Sanity testing is not supposed to be comprehensive testing.
• Regression testing - Regression testing plays an important part in the bug life cycle. Regression testing involves re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle, but there should never be an attempt to minimise the need for regression testing. Automated testing tools can be especially useful for this type of testing.
• Acceptance testing - Acceptance testing, as the name suggests, is the final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time. This type of testing can also mean the make or break situation for a project to be accepted.
• Load testing - Again, as the name suggests, load testing means testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails. This is part of a system to ensure that even when a system is under heavy load, it will not suddenly collapse, and can help in infrastructure planning.
• Stress testing - Stress testing is a term often used interchangeably with 'load' and 'performance' testing. Stress testing is typically used to describe conducting tests such as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
• Performance testing - Performance testing is a term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans. Performance testing is also used to determine the time periods involved for certain operations to take place, such as launching of the application, opening of files, etc.
• Usability testing - Usability testing is becoming more critical with a higher focus on usability. Usability testing means testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. This is done ideally through the involvement of specialist usability people.
• Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes. Given that the first thing users see is the installer, how the installer works and whether people are able to get through it clearly are some of the things measured through installer testing. In addition, install / uninstall / repair should all work smoothly.
• Recovery testing - One does not like to anticipate such problems, but given that crashes or other failure can occur, recovery testing measures how well a system recovers from crashes, hardware failures, or other catastrophic problems.
• Security testing - Security testing is getting more important now, with the increase in hacking and the focus on security measures to prevent data loss. Security testing determines how well the system protects against unauthorized internal or external access, willful damage, etc., and may require sophisticated testing techniques.
• Compatibility testing - Compatibility testing determines how well the software performs in a particular hardware/software/operating system/network/etc environment.
• Exploratory testing - This type of testing is often employed in cases where we need to have a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it, common in situations where the software being developed is of a new type.
• Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
• User acceptance testing - determining if software is satisfactory to an end-user or customer. Similar to the acceptance test described above.
• Comparison testing - Comparison testing means comparing software weaknesses and strengths to competing products; this is very important to evaluate your market, and to determine which features you need to develop.
• Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
• Beta testing - Also called pre-release testing, it is the testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers. The advantage is that you can test with users, as well as get verification about software compatibility on a wide range of devices.
• Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
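As referenced in the unit testing entry above, here is a minimal sketch of a unit test using Python's built-in unittest module. The validate_age function and its 18-99 range are hypothetical, invented purely for illustration:

```python
import unittest

def validate_age(age):
    """Hypothetical field-level validation: accept ages 18-99."""
    return 18 <= age <= 99

class TestValidateAge(unittest.TestCase):
    def test_boundaries(self):
        # Boundary values: just outside, at, and just inside the range.
        self.assertFalse(validate_age(17))
        self.assertTrue(validate_age(18))
        self.assertTrue(validate_age(99))
        self.assertFalse(validate_age(100))

    def test_typical_values(self):
        # One representative from each equivalence class.
        self.assertTrue(validate_age(45))
        self.assertFalse(validate_age(5))

if __name__ == "__main__":
    unittest.main()
```

The same structure scales up: each module gets its own test class, and the whole suite can be run automatically after every build, which is what makes regression testing practical.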
Posted by Ashish Agarwal at 12/16/2008 06:43:00 AM 0 comments
Labels: Techniques, Terms, Testing, Types
Tuesday, December 9, 2008
Some testing definitions
Some definitions of key testing terms:
What is software 'quality'?
Trying to attain software quality implies being able to meet the following goals: reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. It is not easy to objectively define quality. It will depend on who the 'customer' is and their overall influence in the scheme of things. If you were to take a holistic view of the customers, you would involve the following people: end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.
What is the 'software life cycle'?
A software life cycle is one of the most popular terms that a person working in software is expected to know. The life cycle begins when an application is first conceived and ends when it is no longer in use. The various in-between parts of the life cycle are enough to fill a separate book, but at a first level, the term includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, and phase-out.
What is 'Software Quality Assurance'?
Software QA involves the entire software development process from beginning to end - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention' - to defining processes that make it more difficult for problems to creep in.
What is 'Software Testing'?
When software is written by developers, it is a given that there will be sections that do not work properly. Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions, and can cover a wide gamut of activities. Testing should intentionally attempt to make things go wrong, to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.
Posted by Ashish Agarwal at 12/09/2008 11:39:00 PM 0 comments
Labels: Explanation, Learn, Terms, Testing
Thursday, December 4, 2008
Testing Strategies/Techniques
Here are some of the testing strategies that need to be kept in mind:
• Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guesswork by the tester as to the methods of the function (a sketch of this appears after this list)
• Data outside of the specified input range should be tested to check the robustness of the program
• Boundary cases should be tested (top and bottom of specified range) to make sure the highest and lowest allowable inputs produce proper output
• The number zero should be tested when numerical data is to be input
• Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real time systems
• Crash testing should be performed to see what it takes to bring the system down
• Test monitoring tools should be used whenever possible to track which tests have already been performed and the outputs of these tests to avoid repetition and to aid in the software maintenance
• Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing, and state testing.
• Finite state machine models can be used as a guide to design functional tests
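Here is the sketch referenced in the first bullet - randomly generated inputs from a tester-specified range, combined with the boundary and zero cases from the other bullets. The function under test (compute_discount) and its input range are hypothetical:

```python
import random

def compute_discount(amount):
    """Hypothetical function under test: 10% discount on orders over 100."""
    return amount * 0.9 if amount > 100 else amount

def generate_inputs(low, high, n_random=20):
    """Random values from the specified range, plus boundary cases,
    zero, and out-of-range values to check robustness."""
    inputs = [random.uniform(low, high) for _ in range(n_random)]
    inputs += [low, high, 0, low - 1, high + 1]  # boundaries, zero, out of range
    return inputs

if __name__ == "__main__":
    for value in generate_inputs(0, 1000):
        result = compute_discount(value)
        # Invariant check: the discounted price never exceeds the input.
        assert result <= value, f"failed for input {value}"
```

Note that the test checks an invariant rather than an exact expected value; with randomly generated inputs, that is usually the only practical oracle.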
Posted by Ashish Agarwal at 12/04/2008 12:44:00 AM 0 comments
Labels: Techniques, Testing
Sunday, November 16, 2008
Different types of end-user metrics
Typically, metrics in software are meant to mean something related to capturing lines of code, number of defects, or some other measurements related to software quality. User metrics are something entirely different, meant to capture data relating to the customers' usage of the software. The advantages that you get out of capturing such end-user software metrics are listed below (a small logging sketch follows the list):
1. Capturing such metrics helps in validating design principles: Say both product management and engineering feel that a particular feature has been implemented the right way, but after looking at user-functionality usage metrics, you find that usage of the feature is lower than expected; then some re-thinking needs to happen.
2. Helps in capturing problem areas: Suppose there is logic in the system that reports every time the application gets into an unstable or incorrect state (for example, if parameters are being passed to a function in a combination that was deemed unlikely); then you know that there is additional work to be done in these areas.
3. Finding priority areas: Reviewing user-functionality usage graphs can help you find which functions of the application are the most used. Sometimes, product teams get surprised when they review the usage metrics for their applications. Suppose you are the team for a photo editing software, and you expect that some of the editing tools would be the most popular; and then you find to your surprise that a simple tool such as a Quick Fix tool is the most popular. Such data helps you determine the areas that are important to users, and what you need to focus on.
4. Determining poorly designed workflows: If you have metrics in place that determine how long a user takes to complete a particular workflow, as well as how many people have abandoned the workflow, you get an idea of how simple vs. badly designed a workflow is. A poor workflow would leave users spending far more time on that workflow, or even pressing 'Cancel' to leave the function in frustration.
5. Determining which features to drop: Sometimes there are features in a software product that are built with a lot of hope, and do not live up to the promise. In addition, there are other features that have outlived their usefulness and need to be dropped as part of a periodic cleaning of the application. Such end-user metrics can help a lot in determining the features that need to be removed.
6. Determining license usage: In the case of software that is available as floating licenses or as enterprise software, capturing usage information helps in determining whether the pricing of such software is correct. Such metrics also help in determining whether there is any violation of licensing terms.
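Here is the logging sketch referenced above - a minimal, hypothetical illustration of how an application might record the usage events that these metrics are built from. The event names and the file-based sink are assumptions made for illustration; a real system would batch the events and send them to a collection server:

```python
import json
import time

class UsageMetrics:
    """Minimal sketch of an in-app usage event recorder (hypothetical)."""

    def __init__(self, log_path="usage_events.jsonl"):
        self.log_path = log_path

    def record(self, event, **details):
        # Append one event per line; a real implementation would batch
        # these and upload them to a metrics collection server.
        entry = {"ts": time.time(), "event": event, **details}
        with open(self.log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")

metrics = UsageMetrics()
metrics.record("feature_used", feature="quick_fix")
metrics.record("workflow_started", workflow="photo_export")
metrics.record("workflow_abandoned", workflow="photo_export", seconds_spent=42)
```

Aggregating such event streams across users is what produces the usage graphs, workflow timings, and abandonment counts discussed above.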
Posted by Ashish Agarwal at 11/16/2008 04:21:00 PM 0 comments
Labels: Metrics, Planning, User
Saturday, November 15, 2008
Product Development: Planning for metrics
What are metrics? Metrics are becoming an increasingly important way of capturing all sorts of information about the product and about the interaction of users with the software system. If you are looking for a definition, here it is: a software metric is a measure of some property of a piece of software or its specifications. Let me give you a commonplace application: Microsoft has touted Vista as their most stable and secure operating system. To validate whether Vista is indeed a stable operating system across the wide-ranging hardware of their users, they need a way to measure how often the system crashes. And hence, you need a crash reporting log that sends back information to Microsoft whenever the operating system crashes (or, if this is not possible, whenever the system recovers from a crash). To include user benefits, when this information is reported, the company could build in a system that would also try to analyse the reasons for the crash and provide a fix (this worked for me when the system was able to let me know that a new monitor driver was available).
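To make the crash-reporting idea concrete, here is a minimal sketch of a crash hook in Python. Everything in it - the endpoint URL and the payload fields - is a hypothetical illustration of the mechanism, not a description of how Windows Error Reporting actually works:

```python
import sys
import traceback
import platform

CRASH_ENDPOINT = "https://metrics.example.com/crash"  # hypothetical URL

def report_crash(exc_type, exc_value, exc_tb):
    """Capture an unhandled exception and assemble a crash report."""
    payload = {
        "os": platform.platform(),
        "error": exc_type.__name__,
        "message": str(exc_value),
        "stack": "".join(traceback.format_exception(exc_type, exc_value, exc_tb)),
    }
    # A real implementation would POST the payload to CRASH_ENDPOINT
    # (after obtaining user consent); here we just print it.
    print("Would send to", CRASH_ENDPOINT, ":", payload)

# Install as the handler for uncaught exceptions.
sys.excepthook = report_crash
```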
There are all sorts of metrics (and we will talk about some of these in the next post); this post is more about planning for metrics during the project kickoff stage. How do you plan for metrics during this initial phase?
- The product management team and engineering need to work out which are the metrics they want to capture (these could be in the nature of: different functions launched by the user within the application; the average time that the user keeps the application alive; and so on)
- The tool to be used for this purpose needs to be evaluated (this may be a tool used by other applications within the company, or an external tool may be needed for this purpose)
- If the tool needs to be purchased, then the funding for the tool needs to be planned
- Effort estimation for the metrics tool needs to be planned. The effort estimate for the metrics tool needs to include the effort estimate for making the hookups to the tool inside the application dialogs, as well as the effort needed for testing of the application metrics.
- Legal approval. In many cases, a metrics tool that is well integrated with the software application needs to capture user steps and workflows, and may involve privacy issues. Hence, it is important that any such effort to capture metrics has been validated with the legal team, and the Terms of Use / License has been appropriately amended.
Posted by Ashish Agarwal at 11/15/2008 09:57:00 PM 0 comments
Labels: Development, Effort, Legal, Metrics, Planning, Product
Wednesday, October 29, 2008
Product Development - Prerelease planning
During a product development cycle, getting inputs from users is of great importance, but it is not easy for a development team to get such inputs. One way is to run focus groups or other usability studies, where groups of users from the desired customer segments are quizzed about their needs and their workflows, and the user workflow design is based on that. However, such studies are often one-time only, and it is hard to run usability studies on a continuous basis to get frequent guidance for the development team. One option that works fairly well in such scenarios is a pre-release / beta program, where people from the target customer base interact with the team on a regular basis. They would do this under an NDA (Non-Disclosure Agreement), so the fear of details of the product being revealed to the public is far reduced.
A pre-release program needs to be planned thoroughly, ideally when the project planning is happening. Some of the factors that need to be planned as a part of this are:
1. At what stage should the pre-release program start? Right at the start of requirements detailing, where pre-release participants can review snapshots of how the workflow would look and provide feedback? If this can be achieved, then it makes sense to continue it through the development cycle. The team can get quick feedback on user reaction to proposed designs.
2. How many participants should be part of this program? This is a question for the product team, but given that only a percentage of the people signed up for a pre-release program actually provide useful feedback over a period of time, the team should plan to sign up more people than required.
3. What are the features that should be exposed to users through a pre-release program ? Features should be selected that are new, features where there are extensive usability issues, and features where there is a lot of doubt about the proposed implementation.
4. How should the users be exposed to features ? The normal process is that in the initial stages, pre-release users would be exposed to mockups that detail the workflow of the features, and as time goes by and builds start getting made with the features being implemented, the pre-release users would also start getting exposed to these builds and features and can test them with their own perspective and provide useful feedback.
5. Actual process for taking feedback: Typically, once pre-release users get going, they can generate a significant amount of feedback over the product development cycle, in terms of feedback on features, new feature requests, and bugs. Each of these needs its own channel for capture: people on the product team deployed to monitor the feedback, and a separate process for making sure that bugs logged by users move into the bug tracking software used by the product team, so that the pre-release users are able to view the progress of these bugs.
6. Ratings of the product: It is always useful to get users to rate the product, along with allowing the pre-release users to provide open-ended feedback on the features in the product. If you do this a number of times in the cycle, it provides the product team with a view of which features most appeal to the users; it also gives the product management team a list of features that can be taken up for the next version.
There would be many other advantages as well - please share them via comments if you know of other advantages of a robust pre-release system. For example, getting the pre-release system in place allows testing of the product across a broad range of devices and operating systems.
Posted by Ashish Agarwal at 10/29/2008 04:00:00 PM 1 comments
Labels: Development, Feedback, Prerelease, Product, User
Product Development - Actual Kickoff of Development Effort
During every product development cycle, there are 2 distinct stages that you reach in the cycle. There is an initial phase when the planning happens, and there is a later stage when the actual work happens with respect to the actual design and development effort. These are very broad level definitions, with many details and finer points.
- Planning stage: This is the stage where the team defines with broad strokes, the details of what the product will look like in terms of features. The general direction of the product, the features that will make it to the release, the schedule of the release, all of these are decided in this stage. In addition, if the company is structured in such a way that the configuration group, installer and release teams, internationalization teams, and other central technology groups are separate teams, then they will need to be signed up.
- Implementation stage: In this stage, the teams get working on the actual implementation of features. By this time, the product team has already decided on the features, their priority, and the schedule. The core of the development phase - requirements breakup and detailing, user design, engineering design, test cases and test plans, coding, testing, and release - happens in this stage.
In between these 2 distinct phases of the project, there is a need for an actual kickoff. If the product needs approval from executive management, and to get all the supporting teams onto the same page, a typical kickoff meeting is called. In this meeting, the project manager / program manager gets all the stakeholders (including the product team, management, executive sponsors, supporting teams, product management, etc.) into the same room and presents the schedule, final feature list, commitments of support, languages, pre-release plans, issues and risks, and so on.
Such a meeting typically takes around a month to plan, since you need to find a meeting time that is convenient for all the attendees, prepare a presentation to run in the meeting, and so on. The kickoff meeting is the actual time when the sponsors say yes to the product development phase proceeding; at the same time, there is also a chance that the executive management team may need clarifications, or may even ask the team to go back and review their plans.
Posted by Ashish Agarwal at 10/29/2008 11:28:00 AM 0 comments
Labels: Approval, Development, Planning, Product
Saturday, October 11, 2008
Product Development - planning a minor / dot release - Challenges
In the last post, I had talked about why a minor (dot) release is needed, as well as some of the reasons as to why doing a dot release is an inconvenience. However, if the decision has been made to do a dot release, then it is necessary to understand the process of planning and executing a dot release, including some of the difficulties (challenges) that emerge in such a minor release.
What are some of the activities that need to be done ?
1. You need to finalize the changes that will go into the dot release. If the release is because of some known changes, then the changes need to be analysed, and the engineering response (and design) needs to be worked out.
2. If the changes are still unknown - for example, if your release is failing some security tests or some certification - then you need to figure out what can be done. Take an example where the earlier release is failing some new certification norms: for those who know how certification works, it can be a lot of effort to prepare the infrastructure and execute all the cases. For instance, you may need to take the help of tools for certification, and those tools may need an upgrade of memory. In other cases, the amount of testing required may be huge, and the actual calendar time needed may stretch the schedule of the dot release.
3. For companies where a lot of work is handled by support teams (configuration, build, release, internationalization, etc.), the overhead of coordinating all those teams and getting their support takes both budget allocation and time.
4. Too many minor releases cause cynicism in the market about the initial stability of the software release. For example, Microsoft releases many service packs for its software, and there are many people who do not migrate to a newer version until they see 1 or 2 service packs, because they would rather wait for some of the important bug fixes.
5. You run into issues where there are multiple releases around the same version (say versions 8.0, 8.1, 8.2, etc. for a product). Once you have multiple versions, you get into support issues whereby the issues are different for these versions, and support may be a nightmare.
6. Newer dot versions in many cases need to be deployed to the retail channel (replacing the software already in the retail channel), and to the web store (including to many online software sellers who resell the software). All of these steps involve huge logistical problems and / or costs.
7. Branching strategies. Getting a dot (minor) release in process raises major configuration issues (getting branches in place, especially when work is also ongoing for the next release), and takes a fair amount of explanation to both the Development and Quality teams.
Of course, if you folks can suggest more issues that make minor releases a challenge, please add comments.
Posted by Ashish Agarwal at 10/11/2008 09:43:00 AM 0 comments
Labels: Challenges, Development, Dot Release, Minor Release, Problems, Product
Saturday, October 4, 2008
Product Development - what is a minor / dot release
For those of you unacquainted with the concept of a minor release (I will also call it a Dot release from time to time), it is a release that you do after the main release has been done. If that did not make too much sense, let me try again ! Suppose you have released version 8.0 of Microsoft's Visual Studio to great fanfare, and with much expectation that this is 'the' release, perfect in all ways. Once you release 8.0, the overall grand plan would be to sell 8.0, and have the team working on developing 9.0 with no constraints.
Let me tell you a little secret. As you move closer to the release date of a product version, you (and in fact, most of the team members) start entering a time of partially restrained panic and impatience. It is also a normal decision that as you get closer to the release date, bugs that would have been fixed by the project team even a couple of months back will not be fixed now, unless they are deemed critical. The primary reason for this customer-unfriendly move? Simple: Murphy's Law - 'If anything can go wrong, it will'! Any bug fix has a cost - the fix needs to be reviewed, and then the quality team needs to test the area impacted by the bug. If something goes wrong with a bug fix earlier in the cycle, you have time to fix the impact. However, as you get closer to your release date, this buffer time is no longer available. As a result, almost every product team takes a call that as the release date approaches, bugs need to be more and more critical before they can be fixed. Thus the product team may opt not to fix bugs that could potentially impact customers; if a bug is found, say, a week before release, it is classified based on whether you would stop the release for such a bug. Sadly, very few bugs meet the criteria of being a release-stopper.
Once you have released version 8.0, there will be bugs that come in from the field (including bugs that the product team had decided should not be fixed). Also, there may be other changes that are required in the release, and which cannot wait for version 9.0. And thus the stage is set for the release of Visual Studio 8.1 (now you know where the term dot comes from, since this minor release is actually 8 'dot' 1). Such releases follow a few basic points:
1. Bugs (whether older bugs, or new bugs reported by customers) are evaluated as to whether they are deemed important enough to fix; these are the only bugs that are selected for inclusion in the dot release
2. Typically, when a dot release is planned, the release supplants the earlier release. So, following the above example, new customers will get release 8.1 of Visual Studio vs. release 8.0 that was released earlier
3. A dot release will typically have all the activities of a full release, the only difference being that the time frame is drastically reduced. The flip side is that the inter-group coordination that previously had more time available has to be compressed into a shorter time frame
4. A dot inconveniences everybody, since it takes much more attention to do everything in a shorter time period; further, it takes away resources (including team management attention) from working on the next release
5. Reaction time is much less than in a normal release
Posted by Ashish Agarwal at 10/04/2008 11:56:00 PM 0 comments
Labels: Development, Dot Release, Product
Sunday, September 28, 2008
Product Development - Rolling out the patch
The previous post had thoughts on some of the conditions under which a software patch is typically required. Once the decision has been made to make a patch, there is a set of activities that needs to be done. A patch is typically a miniature version of a complete product development cycle - not with all the activities, but certainly with many of the steps that one needs to carry out in the typical cycle. What are some of these steps?
- Decide which of the fixes need to make it to the patch: When preparing a patch, there is always the temptation to fix all the issues that have been found after the release. However, beware of feature creep. You should only include fixes in the patch that are deemed critical. Putting in other fixes means that the time period for the patch becomes longer, that QE needs to put in more effort to test the patch, and so on.
- Make sure that all the teams are signed up for this patch. For a moderately sized software company, there will be multiple teams (with their specialized functions) that are needed to make the patch. These include the development team; the quality testing team; the configuration team that is responsible for the installer and release; the localization team (if there are multiple languages involved - this is another decision point, whether the patch needs to be rolled out for all languages); marketing teams, to ensure that patch publicity plans have been rolled out; and web teams (if the company website is maintained by another team, they need to be involved so that they can schedule the patch rollout date into their plans).
- Decide the schedule of the patch. Even though this is an abbreviated release, it will still take time, and the further this time pushes into the schedule for the next release, the more impact that this patch will have in terms of creating problems for the next release.
- Decide on branching strategies. Typically teams need to make sure that the source code repository has a proper branch created to handle the work for the patch. In most cases, the entire team will not be involved in making the patch, so the work being done by the rest of the team for the new version will be on the main branch, while the work being done by the engineers working on the patch will be on the separate branch. The strategy also needs to ensure that the defects fixed on the patch branch are integrated back into the main branch once the work on the patch is done. (A sketch of such a branching setup follows this list.)
- While the work on the patch is being carried out, if the patch is due to some major customer issues or some security issues, then the marketing and management arms of the team need to decide whether to release news of an upcoming patch. Typically, news of a patch is not released in advance, but sometimes such news can help in calming down customers who would otherwise be worried.
- Development work for the fixes that go into the patch will need to be done. This typically involves a thorough engineering investigation into the issues that need to be fixed, as well as working with outside partners if an issue needs such cooperation.
- Once the fixes are made, they need to be thoroughly tested by the QE team on the different platforms on which the software is supported
- Now the patch is ready, and needs to be rolled out through the different media that can be used. The patch can be mentioned on the company web site, on the product page, and in customer support forums; communicated to product users directly if there is the ability to send a notification inside the product; sent via email to all the registered users; and shown as an update if the app has the ability to do an automatic check for updates.
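Here is the branching sketch referenced above - a minimal command sequence for the strategy just described, assuming a Git-based repository; the branch and tag names (release-8.0, patch-8.0.1) are hypothetical:

```sh
# Cut a patch branch from the shipped release line (hypothetical names).
git checkout release-8.0
git checkout -b patch-8.0.1

# ... engineers commit only the critical fixes on patch-8.0.1 ...

git tag v8.0.1           # tag the tested patch build for release
git checkout main        # integrate the fixes back into the main
git merge patch-8.0.1    # branch so the next version has them too
```

The last step is the one teams most often forget: without the merge back, the bugs fixed in the patch reappear in the next major version.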
The next post will talk about minor (dot) releases and the difference between a patch and a dot release.
Posted by Ashish Agarwal at 9/28/2008 12:54:00 AM 0 comments
Labels: Patch, Problems, Product
Thursday, September 25, 2008
Product Development - Doing a patcher or a minor release
When doing the planning for a new version of the product (not typically applicable when creating a new product), it sometimes becomes necessary to consider the case of doing a patcher or a minor release (also known as dot release).
Picture this scenario: your product development team has already started planning for the features that will be implemented in the new cycle. They are trying to get an initial SWAG (literally, a Wild-Ass Guess - the term typically used when experienced development managers and engineers make a rough guess of the amount of resources it will take to develop a given feature), and are working with Product Management and User Design to get some more details of the initial feature requirements, so that a more accurate (but still rough) estimate is available. In the middle of this, if they are suddenly confronted with the need to plan for a patch or a dot (minor) release, it can take away a fair number of resources and affect the schedule for the next version.
So why does a team end up doing a patch or a minor release (dot release)? Let's talk about a patcher first.
Reasons for doing a patcher:
- Easiest reason: After releasing the product, you discover a bug (through internal testing or from customers) that is liable to affect a number of customers. Suppose you make a printing application, and you find that printing of Excel documents does not happen properly in common cases; this is something that would cause you to do a patch, since there is a high probability that a number of customers will be affected by such a problem.
- Crashes: A slightly more difficult situation, where a number of customers have reported problems where the application suddenly crashes. On diagnosis, you find the root cause of the problem and decide that the risk of customers hitting the crash is not low, or that there is a high risk of important data loss when the crash does happen. This is very important for certain classes of applications, such as financial and enterprise applications.
- Customer / tech support issues: There are a variety of issues that are not important enough to warrant a patcher on their own, but together they are causing a high frequency of customer support issues and tech forum posts. Because all of these have a cost (and a lowered reputation because of many customer posts on forums is not easy to recover from), it is sometimes important to admit that mistakes were made and release a patch.
- Device dependencies: Suppose you are the maker of a photo software, and many new cameras are released (or you discover problems with some existing cameras); it is important to project an image that you are responsive to developments in your field and are providing updates to your customers.
- Adding some important new operational functionality: This is a slightly tricky area, since the recommendation is not to implement new functionality in a patch; but if you have some important new functionality that needs to reach as many existing customer installations as possible, then this seems permissible. For example, if you get a new component that allows you to provide customers with a new and easier update mechanism, it makes sense to provide such functionality through a patch available for wide download and use.
- Dependencies on other software. Nowadays most large software applications also internally use other software components. For example, video players will use external codecs, CD burning software will use device burning codecs from Nero or Sonic, and many other such examples. If you find that your dependency is going to cause problems unless you install a newer version of the external component, a simple way is to incorporate such changes in a patcher.
- Competition. Your competition is going to release a new version of their software that gives them a competitive advantage, and your development process means that you cannot release a new version quickly enough. A workaround is to make the changes in a patch and make that available to your customers.
This post is turning out to be longer than I thought, so the process for deploying patches as well as the development of minor releases will be covered in the next post ..
Posted by Ashish Agarwal at 9/25/2008 11:27:00 PM 0 comments
Labels: Development, Dot Release, Minor Release, Patch, Product
Monday, September 22, 2008
Product Development - Requirements Planning - Template
In the previous 2 posts, I have been talking about the process leading to a presentation, right at the beginning of the project, where product management presents its plan of feature implementation, revenue and pricing figures, competition analysis, and so on. The objective of such a meeting is to ensure that management / executive sponsors know the direction that the product will take and are able to satisfy their doubts; they can then either bless the direction taken by product management, send the team back to the drawing board to re-make the strategies, or ask for some slight tweaking.
For this information to be presented to management, it has to be packaged in a proper template where the information is arranged in logical order. I have tried to find templates on the internet that can meet these needs, and some of them are below. You may need to modify or tweak them slightly to suit your needs.
1. Startup business plan at score.org (link): This is a template more for a business plan, but it contains some very relevant questions that you need to answer for the purpose of creating a new product; you can take the relevant questions from this template and adapt them for your own needs.
2. A sales forecast plan (link): This entire page is very useful for the purpose of building and forecasting a sales plan, and can help you build up the figures required for doing your revenue forecasting.
3. Product Development Schedule Template Syn1.0 ($ 59) (link): a detailed list of activities and tasks for planning and managing product concept, design, development, test, launch and release.
4. VSD Template (link): Has a lot of steps that you should be logically taking if you are defining a new product
5. SWOT analysis template (link): Strengths, Weaknesses, Opportunities, Threats - a useful analysis that you should carry out for your new product development
6. Marketing Plan Template (link): Very useful for preparing the marketing and product management plan that will help you to prepare for the presentation
7. Product development planning from Microsoft (link): This template outlines a strategic approach for product development. By working with your business position in the marketplace, establishing product infrastructure, and leveraging knowledge of your targets and competitors, this template establishes a framework to begin product development. This is a Microsoft Project 2007 template.
8. Individual templates for efficiency during the product planning process (link)
9. Alta Advisor product creation plan (link): Helps you collect the information you will need to prepare a proper plan, as well as provide you many points that you can use to present to management.
10. Product Manager's Toolkit (link): Many tools such as MRD, PRD, Business Case, etc, all that will help you prepare for a new product plan.
11. Product Definition and Launch Plan Template (link): A Word document that acts as an MRD
12. Six free templates (link): Product Management Life Cycle Model, Product Strategy Outline, Ten Step Marketing Plan Outline, Marketing Dictionary, Guideline for Managing Product Team Meetings, Strategic Marketing Planning Model
13. Sample Marketing Requirements Document (MRD) (link)
Posted by Ashish Agarwal at 9/22/2008 09:32:00 PM 0 comments
Labels: Management, MRD, PRD, Product, Requirements, Specification
Sunday, September 21, 2008
Product Development - some of the project planning related steps - Requirements gathering contd..
Typically, when a new project or new version is being conceptualized, the actual product development is not kicked off until senior management / executives have had a chance to review the requirements identified for the release and get a sense of assurance that the features planned for the release fulfil certain conditions. This is typically handled in a kickoff meeting where the set of requirements / features is presented to management, and where management can quiz the product managers with a series of questions until they are satisfied. It happens many times that the product management team is sent back to the drawing board with some feature changes or tweaks to existing features. This meeting is typically called a Product Requirements Review, and is managed by the Product Management team. It goes without saying that the more important the product is to the overall revenue and future of the company, the more involved this meeting is. The questions are harder, the research and data need to be more thorough, and the presentation needs to be reviewed beforehand so that the flow of the presentation can be maintained.
What are the items that are typically validated in such a meeting?
1. The additional features must be such that customers will be ready to pay for the new product version or the new product
2. Management also needs to see other features that were planned but not being executed in this cycle because of time constraints (the second tier of features)
3. Management needs to get a preview of existing competitor features as well as new proposed features
4. Data that is available from the field (including existing customer reports - enhancement needs and complaints) helps in this kind of presentation, and gives a perspective on current customer satisfaction levels
5. Pricing for the products (including enhanced pricing, update pricing for customers wanting to upgrade from earlier versions, volume pricing) needs to be decided and validated. This will include strategies for trials (where users can try either a limited set of features without the software timing out, or try out the full feature set for a limited period of time)
6. Deals with partners are also presented in such a meeting. This includes the pricing if bundling is done with computer hardware OEM's, or packaged with installers of other products (example, when anti-virus trial downloads are available along with Yahoo / Google toolbar downloads, and Google toolbar is also downloaded with other products)
7. The timing of the release is also an important factor in this whole discussion; given that product release timing is extremely important - after all, if you have a consumer game product, it would need to be released so that it can catch the Christmas buying season.
Posted by Ashish Agarwal at 9/21/2008 02:42:00 PM 0 comments
Labels: Development, Processes, Product, Requirements
Tuesday, September 16, 2008
Product Development - some of the project planning related steps - Requirements gathering
Most people in the software industry have heard of the Software Development Life Cycle (SDLC). In the full form that you read in most literature, it has a number of steps, and may seem a bit long-winded. However, the practical version of the SDLC is to the effect of:
Requirements -> Design -> Development -> Testing & Bug fixing -> Alpha / Beta Release -> Acceptance testing -> Release
There are more complications in this process, with many additional steps such as the resource estimation that happens during requirements and design, development of test cases that happens during the design and development phase, generation of the software documentation, etc.
Now, when you step into the world of product building (whether building a new product, or generating a new version of an existing product), there is a slightly modified version of the SDLC (typically called the Product Development Life Cycle, or PDLC). I am not going to cover the PDLC in complete form in this post; instead, the next series of posts will cover a practical experience of what happens during the Product Development Life Cycle. If you see things happening differently in your company, please share through the comments.
Suppose you are the project / program manager for a product that has been released before, and you are doing new version planning, here are some of the initial activities that you should be considering:
- User feedback collection: Never neglect the fact that among the changes you need to make in the newer version will be modifications of existing features. Sometimes the focus of the team is on building new features, but if existing features do not work well for customers, then the problems with these features need to be studied, and the team needs to discuss which of the problems should be taken up and the corresponding features modified
- Once these modifications are identified, there needs to be some detailing of the changes, along with further concretisation of what the new desired workflow would be (this can be done through usability testing with the desired audience)
- That covers changes to existing features. In addition (or actually as the main work), the team has to decide which new features need to be taken up. Sources for generating this list of features include customer feedback from sales and marketing teams, bug reports / feature requests filed by customers on user forums, comments in product reviews, and so on
- Another great source for generating a list of new features is looking at features that are available with competitors, and deciding on the list of features that are must-haves
- Another source is the list of features generated by reviewers when you are showing your product for review. Typically, they will give you feedback for the features, and also ask you about features that are not present in your product. Many of these reviewers have a feel for the pulse of the customer base, and you would be wise to review the features that they are asking for
- Sometimes, the feature list needs to be generated internally. You need to create a new market, or present a feature that will make your competitors feel that they are way behind, and one good way is to get the marketing teams, as well as the product teams together in a brainstorming session and thrash out possible features that need to be incorporated.
Enough for this post; the next post in this series will continue with requirements generation and further analysis.
Posted by Ashish Agarwal at 9/16/2008 10:57:00 PM 0 comments
Labels: Product, Requirements
Saturday, September 6, 2008
Function Point Analysis Resources / Tools
Function Point Analysis is one of the most important methods in use right now for estimating the 'size' of a project, the aim being to break down a project into smaller parts that can serve as the basis for estimation. However, it can be complex, and before rushing into this area, it is advisable to read up on the subject. So here are some resources and tools that should help (if you know of more, please add them in the comments):
International Function Point Users Group: The mission of IFPUG is to be a recognized leader in promoting and encouraging the effective management of application software development and maintenance activities through the use of Function Point Analysis and other software measurement techniques. IFPUG endorses FPA as its standard methodology for software sizing. In support of this, IFPUG maintains the Function Point Counting Practices Manual, the recognized industry standard for FPA.
Bulletin Board for discussions on the above site. There are a large number of discussions on this forum.
Newsgroup on FPA. Access via Google.
An older FAQ on royceedwards.com
Function Point Analysis on Wikipedia
Function Points Analysis on Total Metrics: More resources, including material such as 'How to use FP Metrics in a Project' and 'Which Functional Size Method to Use?'
Cosmic-ISO: The New Generation of Functional Size Measurement Methods
Metrics Documentation Toolkit on totalmetrics.com
Free function point manual (111 pages of goodness in a PDF file)
Online Function Point training on softwaremetrics.com
Article explaining levels of FP counting
Tools page at usc.edu
PQMPlus tool at Q/P Management Group
FP resources at devdaily.com
Tool: Metre v2.3
Tool: Function Point WORKBENCH. Available at this site.
Tool: Function Point Analysis GUI Tool: Read / Download.
Posted by Ashish Agarwal at 9/06/2008 09:10:00 PM 0 comments
Labels: Articles, Resources, Tools
Thursday, August 28, 2008
Weakness of Function Point Analysis
Function Point Analysis is seen as a very important and useful technique for requirements estimation, with numerous other benefits (see the previous post for details). However, even such a well-known method has its detractors, with a number of people and studies pointing out issues with the technique. Here are some of these weaknesses and problems:
- FPA is seen as not fully suited to object-oriented work; the objection is that function points, the core of the technique, cannot be reasonably counted from object-oriented requirements specifications. The problem is that several constructs of an object-oriented specification can be interpreted in various ways in the spirit of FPA, depending on the context.
- Function point counts are affected by project size; ideally they should not be, since function points measure each function independently, but this does not work out in actual practice
- Function point counting techniques have been found hard to apply to systems with very complex internal processing, or to massively distributed systems
- It is difficult to derive logical files from physical files
- The validity of the weights that Albrecht, the founder of FPA, set up in the initial technique, as well as the consistency of their application, has been challenged
- Different companies calculate function points slightly differently (depending on the process and the people who do the actual counts), making inter-company comparisons questionable and negating one of the core benefits of standardised function counts
- FPA conflicts with another standard size measure: the number of lines of code. LOC is the traditional way of gauging application size, and is claimed to be still relevant because it measures what software developers actually do, that is, write lines of code. At best, function counts can be used alongside it.
- Doing FPA means converting the available information back into essentially the same form as a requirements specification, so it tends to end up with the same types of errors; this conversion is regarded as a major error-prone area.
- Function points, like many other software metrics, have been criticized as adding little value relative to the cost and complexity of the effort, which are major factors in decision making
- Function point computations have some inherent baseline errors, because much of the variance in software cost estimates is not considered (business changes, scope changes, unplanned resource constraints or reprioritizations, etc.)
- Function points don't solve the problems of team variation, programming tool variation, type of application, etc.
- FP was originally designed for business information systems applications, so the data dimension was emphasized; as a result, FPA is inadequate for many engineering and embedded systems.
- Another problem, this one with the technical process of FPA, comes up when assessing the size of a system in unadjusted function points (UFPs): classifying every system component type as merely simple, average, or complex is not sufficient for all needs.
- Counting FPs correctly requires a skilled counter. Many companies get this work done by people without the desired skill level (this happens with other techniques as well, but a correct FP count is critical to the whole system)
In spite of these problems, FPA is a very useful tool, and probably a very good fit for estimation.
Posted by Ashish Agarwal at 8/28/2008 11:20:00 PM 0 comments
Labels: Defect, Estimation, Pitfalls, Problems, Processes, Software, Techniques
Sunday, August 17, 2008
Advantages / Benefits of Function Point Analysis
Function Point Analysis is seen as a significantly important tool / process for doing estimation. But what exactly are the benefits you can get from this process, and why are people willing to pay good money to learn it? Read on for the reasons (and if you are willing to share your own story of how FPA worked for you, please do so via the comments).
Function Point Analysis gives you the ability to make reasonably accurate estimates of the most important estimation metrics:
- The project cost
- The duration of the project
- The number of resources required to staff the project
In addition, there are more metrics that are required for a project, and FPA helps in understanding them much better:
- You can get a good idea of the project defect rate
- By calculating the number of Function Points (FPs), you can calculate the cost per FP
- Similar calculations will get you the FPs delivered per hour (a small sketch of both calculations appears at the end of this post)
- If you are experimenting with using new tools, using FP's provides you an easy way to determine the productivity benefits of using new or different tools
Some of the other benefits of using Function Point Analysis are listed below:
- By measuring FP's, you can guard against increase in scope (function creep)
- Measurement of performance indicators enables benchmarking against other teams and projects
- Project Scoping: By breaking the project into several small functions, you get the advantage of being able to easily convey the scope of the application to the user, measured in function points.
- Assessing Replacement Impact: If an existing application has to be replaced by a similar one, you already know the required FP count (and hence the size of the replacement project).
- Assessing Replacement Cost: Once you know the replacement impact, it is easier to derive the replacement cost, since some standard figures are available for cost per FP
- Testing Resources: With the FPs calculated during the project, it is easier to identify the areas that are more complex and will require more testing effort.
In summary, function points work well as a measurement tool in part because they are relatively small units and can be measured easily. These measurements in turn can be used to derive a number of project-related metrics.
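To make these derived metrics concrete, here is a minimal sketch (in Python) of how cost per FP and FPs per hour fall out of a function point count; all the project figures in it are hypothetical examples, not benchmarks.

```python
# A minimal sketch of the derived metrics discussed above.
# All figures are hypothetical examples, not industry benchmarks.

def cost_per_fp(total_cost, function_points):
    """Project cost divided by the delivered function points."""
    return total_cost / function_points

def fp_per_hour(function_points, effort_hours):
    """Delivered function points per hour of effort (a productivity rate)."""
    return function_points / effort_hours

fp = 250          # counted function points (hypothetical)
cost = 300_000.0  # total project cost in dollars (hypothetical)
hours = 4_000     # total effort in person-hours (hypothetical)

print(f"Cost per FP: ${cost_per_fp(cost, fp):,.2f}")  # $1,200.00
print(f"FPs per hour: {fp_per_hour(fp, hours):.3f}")  # 0.062
```

Once you have historical values for these two ratios, the FP count of a newly planned project converts directly into cost and effort estimates.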
Posted by Ashish Agarwal at 8/17/2008 09:42:00 PM 0 comments
Labels: Benefits, Estimation, Techniques
Effort Estimation Technique: Function Point Analysis (Part 2)
In the previous post, you learned about the 5 functional components; in addition to these, there are 2 adjustment factors that need to be applied during Function Point Analysis. These are:
Functional Complexity: As the name states, you need to consider the functional complexity of each unique function. Functional complexity is determined from the combination of data groupings and data elements of a particular function. The data elements and unique groupings are counted and, based on a complexity matrix, each function is rated as low, average, or high complexity (the complexity matrix is different for each of the 5 types of functional components). Once this analysis is done, you get a total called the Unadjusted Function Point count.
Value Adjustment Factor: This is the second adjustment factor, and takes into account the system's operational and technical characteristics. There are a total of 14 questions / factors that need to be used for this calculation:
# Data Communication: The data and control information used in the application are sent or received over communication facilities
# Distributed data processing: Distributed data or processing functions are a characteristic of the application within the application boundary
# Performance: Application performance objectives, stated or approved by the user, in either response or throughput, influence (or will influence) the design, development, installation and support of the application.
# Heavily used configuration: How heavily used is the current hardware platform where the application will be executed?
# Transaction rate: How frequently are transactions executed daily, weekly, monthly, etc.?
# Online data entry: What percentage of the information is entered On-Line?
# End user efficiency: The on-line functions provided emphasize a design for end-user efficiency.
# Online update: The application provides on-line update for the internal logical files.
# Complex processing: Does the application have extensive logical or mathematical processing?
# Reusability: Was the application developed to meet one user's needs or many users' needs?
# Installation ease: How difficult is conversion and installation?
# Operational ease: How effective and/or automated are start-up, back-up, and recovery procedures?
# Multiple sites: The application has been specifically designed, developed and supported to be installed at multiple sites for multiple organizations.
# Facilitate change: The application has been specifically designed, developed and supported to facilitate change.
One of the major sub-processes in FPA is the actual counting of function points. There are several approaches used to count function points; the five major steps in the process are as follows:
1. Determine the type of count.
2. Identify the scope and boundary of the count.
3. Determine the unadjusted FP count.
4. Determine the Value Adjustment Factor.
5. Calculate the Adjusted FP Count.
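To illustrate steps 4 and 5, here is a minimal sketch using the standard IFPUG formula VAF = 0.65 + 0.01 × (total degree of influence), where each of the 14 characteristics above is rated from 0 (no influence) to 5 (strong influence); the ratings and the UFP count below are hypothetical examples.

```python
# Minimal sketch of steps 4 and 5: computing the Value Adjustment Factor
# and the adjusted FP count. Each of the 14 general system characteristics
# is rated 0 (no influence) through 5 (strong influence).

def value_adjustment_factor(ratings):
    """Standard IFPUG formula: VAF = 0.65 + 0.01 * total degree of influence."""
    if len(ratings) != 14 or not all(0 <= r <= 5 for r in ratings):
        raise ValueError("expected 14 ratings, each between 0 and 5")
    return 0.65 + 0.01 * sum(ratings)

def adjusted_fp(unadjusted_count, ratings):
    """Step 5: adjusted FP count = UFP * VAF (0.65x to 1.35x the UFP)."""
    return unadjusted_count * value_adjustment_factor(ratings)

# Hypothetical ratings for the 14 characteristics, in the order listed above
ratings = [3, 1, 4, 2, 3, 5, 4, 3, 2, 1, 2, 3, 0, 4]
print(adjusted_fp(250, ratings))  # 250 * (0.65 + 0.37) = 255.0
```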
Posted by Ashish Agarwal at 8/17/2008 08:27:00 PM 0 comments
Labels: Development, Estimation, Processes, Software, Techniques
Effort Estimation Technique: Function Point Analysis (Part 1)
The Function Point Analysis technique was developed in the late seventies at IBM, which commissioned one of its employees, Allan Albrecht, to develop it. In the early eighties the technique was refined, and a new organization, the International Function Point Users Group (IFPUG), was founded to take Function Point Analysis forward while keeping the core spirit of what Albrecht had proposed.
Function Point Analysis (FPA) is a sizing measure based on units of clear business significance. It is a structured technique for classifying the components of a system, and one of its primary goals is to evaluate a system's capabilities from the user's point of view. It is a method used to break systems down into smaller components so that they can be better understood and analyzed. The main objectives of FPA are:
1. Measure software by quantifying the functionality requested by and provided to the customer.
2. Measure software development and maintenance independently of technology used for implementation.
3. Measure software development and maintenance consistently across all projects and organizations.
FPA uses functional, logical entities such as outputs, inputs, and inquiries that tend to relate more closely to the functions (that is, business needs) performed by the software as compared to other measures, such as lines of code. FPA has become generally accepted as an effective way to
* estimate a software project's size (and in part, duration)
* establish productivity rates in function points per hour
* normalize the comparison of software modules
* evaluate support requirements
* estimate system change costs
In the world of Function Point Analysis, systems are divided into five large classes plus the general system characteristics. The first three classes or components are External Inputs, External Outputs, and External Inquiries; each of these transacts against files and is hence called a transaction. The next two classes, Internal Logical Files and External Interface Files, are where data is stored and combined to form logical information. The general system characteristics assess the general functionality of the system.
Details of each one of these:
1. Data Functions → Internal Logical Files: This contains logically related data that resides entirely within the application’s boundary and is maintained through external inputs.
2. Data Functions → External Interface Files: The second data function a system provides to an end user is also related to logical groupings of data, but in this case the user is not responsible for maintaining the data. The data resides in another system and is maintained by another user or system.
3. Transaction Functions → External Inputs: This is an elementary process in which data crosses the boundary from outside to inside. This data may come from a data input screen or another application. The data may be used to maintain one or more internal logical files. The data can be either control information or business information.
4. Transaction Functions → External Outputs: An elementary process in which derived data passes across the boundary from inside to outside. The data creates reports or output files sent to other applications.
5. Transaction Functions → External Inquiries: The final capability provided to users through a computerized system addresses the requirement to select and display specific data from files. To accomplish this a user inputs selection information that is used to retrieve data that meets the specific criteria. In this situation there is no manipulation of the data. It is a direct retrieval of information contained on the files.
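Putting the five component types together, here is a minimal sketch of an unadjusted FP count using the commonly published IFPUG complexity weights (low / average / high); the component tallies below are hypothetical examples.

```python
# A minimal sketch of an unadjusted FP count over the five component
# types described above, using the commonly published IFPUG weights.
# The component tallies are hypothetical examples.

WEIGHTS = {
    "external_input":          {"low": 3, "average": 4,  "high": 6},
    "external_output":         {"low": 4, "average": 5,  "high": 7},
    "external_inquiry":        {"low": 3, "average": 4,  "high": 6},
    "internal_logical_file":   {"low": 7, "average": 10, "high": 15},
    "external_interface_file": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_fp(components):
    """components maps a component type to a {complexity: count} tally."""
    return sum(
        WEIGHTS[ctype][complexity] * count
        for ctype, tallies in components.items()
        for complexity, count in tallies.items()
    )

# Hypothetical tallies for a small business application
tallies = {
    "external_input":          {"low": 6, "average": 4, "high": 2},
    "external_output":         {"low": 3, "average": 5, "high": 1},
    "external_inquiry":        {"low": 4, "average": 2},
    "internal_logical_file":   {"average": 3},
    "external_interface_file": {"low": 2},
}
print(unadjusted_fp(tallies))  # 150
```

This unadjusted count is then scaled by the Value Adjustment Factor described in Part 2 to give the final function point count.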
Posted by Ashish Agarwal at 8/17/2008 06:12:00 PM 0 comments
Labels: Development, Engineering, Estimation, Processes, Techniques
Sunday, August 10, 2008
Effort estimation techniques
A new project is being thought of (or even an extension of a current product). As an example, say you want to build a new shopping cart application for your website. To get started, besides knowing exactly what you want to build (the product details), you also need to estimate how many people you will need and how long it will take. This process of estimation is probably one of the more difficult parts of software development, something that can make or break a project. Suppose your estimate is too low: you will be forced to make compromises, overwork the people involved, or deliver low quality work, all of which are major factors that could cause the project to fail. Fortunately, there are a number of effort estimation techniques that can be employed (this post has only brief descriptions; each will be covered in detail in future posts):
# Function Point Analysis (FPA): A way to break down the work into units to express the amount of business function an information system provides to a user
# Parametric Estimating / Estimation Theory: Estimating the value based on measured / empirical data
# Wideband Delphi: An estimation technique that works by developing a consensus estimate, typically over a series of group meetings
# COCOMO: An algorithmic estimation model, using a regression formula derived from historical data and current project characteristics (a minimal sketch of the basic model follows this list)
# Putnam Model: An empirical model that takes data from the project (such as effort and size) and fits it to a curve
# SEER-SEM: System Evaluation and Estimation of Resources - Software Estimating Model; parametric estimation of effort, schedule, cost, and risk, based on a mixture of statistics and algorithms
# Proxy-based estimating (PROBE) (from the Personal Software Process): Proxy based estimating produces size estimates based on previous developments in similar application domains
# The Planning Game (from Extreme Programming): This is the process that depends on release planning and iteration planning (inside iteration planning, requirements are broken down into tasks and tasks are estimated by the programmers)
# Program Evaluation and Review Technique (PERT): PERT is intended for very large-scale, one-time, complex, non-routine projects, and has a great advantage of being able to incorporate uncertainty
# Analysis Effort method: This method is best suited to producing initial estimates for the length of a job based on a known time duration for preparing a specification
# TruePlanning Software Model: Parametric model that estimates the scope, cost, effort and schedule for software projects
# Work Breakdown Structure: In this technique, all the individual tasks are laid out with their estimates and description, and they can be summed to display the total effort
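As promised above, here is a minimal sketch of the basic COCOMO model: effort (person-months) = a × KLOC^b and schedule (months) = c × effort^d, using the published basic-COCOMO coefficients for the three project classes; the 32 KLOC input is a hypothetical example.

```python
# A minimal sketch of the basic COCOMO model mentioned in the list above.
# effort (person-months) = a * KLOC^b; schedule (months) = c * effort^d.
# Coefficients are the published basic-COCOMO values for each class;
# the 32 KLOC project below is a hypothetical example.

COEFFICIENTS = {
    # project class: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class="organic"):
    a, b, c, d = COEFFICIENTS[project_class]
    effort = a * kloc ** b      # person-months
    schedule = c * effort ** d  # elapsed months
    return effort, schedule

effort, months = basic_cocomo(32, "semi-detached")
print(f"{effort:.1f} person-months over {months:.1f} months, "
      f"~{effort / months:.1f} people on average")
```

Note that basic COCOMO takes size (KLOC) as an input, which is exactly where a sizing technique such as Function Point Analysis can feed in.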
Posted by Ashish Agarwal at 8/10/2008 12:29:00 PM 0 comments
Labels: Algorithm, Development, Engineering, Models, Resources, Software, Techniques
Tuesday, August 5, 2008
Doing something new in the same company
Suppose you are working for a company that makes software products, and are getting somewhat bored of the product you have been working on for some time now (if you have been with the company a while, chances are good that you are working on version X.y of the same product, and have felt that you have not been doing anything new for quite some time). What can you realistically do? There are several options you should consider (some are possibilities depending on the company you work for, in terms of size, policies, etc.):
- You could ask to move to another group. Say you are working on a particular product as a developer; you could ask to move to another product. Most companies are loath to lose good people, and would consider your request
- You could change your sub-role within the same function. Say you are the engineering manager handling a team, and feel that you have been out of touch for quite some time; you could request that you no longer be a manager, and instead move to a non-managerial role. Good companies have an equally rewarding career path for people who want to remain in non-managerial positions
- You could change your function. At times, a person finally realizes that his role is no longer attractive enough. You could change from being an engineer to a quality role, a product management role, or some other role. Moving to a new role needs some background skills as well as a base in the product, and you would already have the base, so that is half the problem solved. If you are looking at such an option, keep a lookout for new positions as they open up.
- Start something new. This is a bit more difficult. Most companies of a certain size have a policy of allowing employees with new ideas to work them through, so they can actually get support to develop their ideas; if an idea has market potential, they could end up leading the effort to take it into a new market
- Take some time off. Many companies have a policy of allowing people to take time off; depending on the duration, this could be paid or unpaid leave (without harming their career in any way)
Finally, if nothing works, and you are feeling bored / frustrated, then it is time to quit your job and work elsewhere.
Friday, August 1, 2008
A great resource: Joel on Software
For a long time now, I have been reading the content Joel Spolsky posts on his blog. I like his writing enough to have taken an email subscription to his feed, so that I am notified whenever he writes a new article. The feed typically carries the opening paragraph of a piece, and then I go to the blog to read the rest. And what is the link to the blog, you may ask? Well, here it is (link)
As Joel says, "This is Joel on Software, where I've been ranting about software development, management, business, and the Internet (ack) since 2000. Rest assured, however, that this isn't one of those dreaded blogs about blogging." He spent time in various jobs, including Microsoft Consulting, then chucked it all and went on to found his own company, Fog Creek Software, in September 2000.
I like a lot of his articles where he talks about design, usability, and sundry other topics. The articles are not long or preachy, but are instead quick reads and, if you are in the software business, you should consider them as part of an ongoing education.
Joel has written books as well.
Posted by Ashish Agarwal at 8/01/2008 04:50:00 PM 0 comments
Labels: Articles, Blogs, Design, Resources, Usability
Tuesday, July 29, 2008
Resources for usability testing
Previous articles covered some details about usability testing; I hope by now I have been able to explain what usability testing is, why it is important, and some of the tips and issues one needs to cover. However, there is always more to explore and read about, so here are some resources that should give you an idea of what goes into a good usability testing program.
Infodesign.com.au (link)
Accessibility forum (link)
Userfocus.co.uk (link)
Web Site Usability (link)
Web usability (link)
Usability.gov (link)
Doug Ward's website (link)
Usability testing (link)
deyalexander.com (link)
Column by Jakob Nielsen (link)
Posted by Ashish Agarwal at 7/29/2008 11:18:00 PM 0 comments
Labels: Document, Resources, Techniques, Testing, Usability
An example of unrealistic expectations in the service industry
This is a true example from around 6 years back, when I was working for an IT software solutions provider (the firm did software projects for different customers). This was a decent-sized company, with something like 12,000 people on the rolls, doing everything from development to testing to requirements analysis. I was working mostly as a business analyst, translating the requirements document into a form that the developers on the project would understand.
This was a new project with a medium-sized bank in the Midwest, and the hope was that we would do the project well enough, and give them a system that worked so well, that they would continue with the company: the start of a long and serious (and profitable) relationship. Sounds good, right? Well, read on.
Around this time, our company, a publicly listed one, was getting on just like the other service companies of that era: doing okay, but not generating great figures. Management was getting hit by analysts, and passed down a directive that every project had to meet the company-defined margin; exceptions only when pleaded before the executive committee, and not otherwise. Implicit was the expectation that anybody running a project that did not promise enough margin would have to explain it.
Now, since our project was with a new customer with whom we had high hopes for the future, we could not charge our usual rates; after all, why would the customer then select us? So our account manager, along with the Vice-President of the unit, went ahead and quoted a rate that was at least 20% lower (which meant fewer people assigned to the project than necessary). Guess what? Pretty soon, the strain and the missing people started to show.
Ego also plays a part. For a Vice-President to go before the committee and plead for more money (a reduction in margin) would reflect adversely on him. The members of the committee, who might have been expected to bring experience in handling these kinds of situations and offer some latitude, could not do so, since the project was never brought before them for review. Pretty soon, somebody senior on the team had the bright idea that weekends could be converted into work hours (maybe one weekend in three off), and this idea was implemented with gusto.
You can guess the rest. People from outside the project did not want to join, quality reviews of the project were hesitant because of the many exceptions, and eventually the customer could tell that the quality was not as desired. Project over, account over, and pretty soon the project manager and other senior team members quit and went to other companies.
This was a disaster caused by management reacting adversely to poor numbers, and unwilling to exercise due diligence in running a project (after all, the first criterion for a project should be to make it successful).
How many of you have similar experiences?
Posted by Ashish Agarwal at 7/29/2008 10:18:00 PM 0 comments
Labels: Avoidance, Contract, Problems, Quality, Service
Tuesday, July 8, 2008
Hiring a consultant for generating your requirements: Part 2 of an external article
The previous article presented the DragonPoint.com piece outlining 5 essential steps to take when you hire people to do requirements analysis for a project. Here is the link to part 2 of that article, which outlines tips 6 - 10 for the requirements gathering process (link). These next 5 tips are:
6. Make resources available: You need to budget for making sure that the requirements capturing team can access the current system and employees.
7. Include the employees who actually use the system: It is critical that the requirements gathering team get access to actual users who use the system, not only to the management people who drive the project.
8. Let your employees know their input is important: Make sure that the employees working with the requirements team understand that they need to help the team. At the same time, these employees can give you an impression of the competence of the requirements team (are they asking the right questions, are they focused on the current processes, etc.)
9. Remember why you hired the consultant: Do not take anything for granted. Explain your process in full detail.
10. Take ownership of the project: Hopefully not necessary to explain this. The project is important, and you need to make sure that it gets full importance.
Posted by Ashish Agarwal at 7/08/2008 10:47:00 PM 0 comments
Labels: Consultant, Plan, Processes, Requirements
Requirements gathering: DragonPoint Article (#1)
Requirements gathering for a software project (one that you really need in order to expand or maintain your business) is a tough job, because it is not an exact science. It depends on the type of project, on the nature of the people who will enumerate the requirements, and on the capability of the team that does the actual gathering. In such a scenario, it makes sense to read as much as possible about the requirements process, so that one is aware of best practices, of different cases where the process has worked, and so on.
A lot of companies hire consultants for this critical stage, and here is the first part of a great article at DragonPoint that covers 5 steps (out of a total of 10) toward becoming better at requirements gathering:
Any needs not identified during the requirements capture stage will result in scope creep. According to Suzie DeBusk, President of DragonPoint, the most effective way to minimize scope creep is to allocate 30% of the time spent on a project to requirements capture, design, and review. How does requirements capture reduce scope creep?
The requirements capture stage is similar to the planning phase of a construction project. If the client and architect do not communicate effectively, the blueprints will not meet the client's needs. And, depending on the size and scope of the discrepancies, this error in communication can result in costly rework as the project evolves.
As per the article, the following are the main points:
1. Be prepared for the initial consultation.
2. Remember your current system
3. Communicate.
4. Look for listening.
5. Listen for insightful questions that demonstrate you and your consultant share common goals.
Posted by Ashish Agarwal at 7/08/2008 05:59:00 PM 0 comments
Labels: Consultant, Plan, Processes, Requirements
Saturday, June 28, 2008
Guidelines for usability testing
When we went in for usability testing, I was part of the team that would evaluate the results, so it was important for me to understand more about the process. Besides the need for usability testing and the process itself, it was also important for the team to understand the guidelines for usability testing, which also help explain how it works in practice. Some of the guidelines I learnt (both from the literature on the subject and from observing the process in practice) were:
1. Deciding the target audience for your usability testing: Given that the outcome of the usability testing will help determine changes to your design strategy, it is absolutely essential that the testing be done with people who are good representatives of the target segment; the selection must be done carefully. Avoid the temptation to cut corners, or to select neighbors who merely seem to represent the target segment. Devise a set of screening questions to help determine whether the selected people actually are good representatives; similarly, use experts to help you find good test subjects. And, other than in a few cases, don't let them know the name of the company - it may bias their results.
1. a) Different users may specialize in separate workflows. For example, if you want to test a new shopping site catering to working professionals, you should design an appropriate screening form, and review the answers to get an idea of which area each user gravitates towards; this can help in deciding the exact set of users.
2. Before actually starting the testing: Remember, your tests will only be good if the participants are comfortable with the whole process. They need to feel that the testing environment is similar to their home or work environment. And of course, go easy on the legalese - you may want to protect your in-progress development and have them sign NDAs, but if you fill these with complicated legal terms, they are likely to get confused. If your office is not very accessible, consider doing the testing in a more convenient location.
3. Starting the usability testing: Don't plunge the users directly into the testing process. Get talking with them; explain what the website is about and what the URL is; get some initial feedback about what they expect from such a site. If they mention particular phrases or terms, it is good to probe what they mean by them. Getting in a few words of polite conversation makes them more comfortable.
4. Decide what you want tested: When selecting tasks for review by users, it is absolutely essential that you drop any notion of favorite sections of the site. You may have added a great new section that was very difficult to develop, but if it is not critical to the success of the site, then evaluating it is not a high priority. Select the tasks that are critical for the success of the site.
5. Scenarios are always better for such evaluations: Again, you need to talk in the language of the customer. You may have had great internal debates on the naming of features, but if you are asking customers to evaluate some flows, you should:
- Ask them to try out workflows / use cases (for example, you need to find some white shoes for your kid, and pay for them through the card that you normally use)
- Use simple language (avoid more technical words such as payment processing, and instead use phrases such as payment options)
- Set tasks that have a logical conclusion
6. The actual task execution: It is core to usability testing that people should not be tasked too heavily during the process. Get them to do one task at a time, and focus on their responses during the execution of the task. In many cases, users may have to be given inputs during the course of the task (say, you want them to test the convenience of posting videos to YouTube); in such cases, make sure they have the equipment the task needs (here, either a sample of home videos or a camera readily available)
7. The participant is not at fault: Sometimes users struggle during the usability test, or get stuck at places you would think are very simple. Remember, all their feedback is important; if they are not able to do some tasks or parts of tasks, it is most likely a reflection on the task rather than any inherent inability on the part of the participants. Further, if they are confused by something, or have to make a choice, don't guide them down a particular path (you would most likely introduce a bias by doing so). If you are asked a direct question, reply, but do not venture an opinion.
8. Keep distractions away: Once the user has started, minimise distractions. Prevent people from coming and going in the location, including heavy traffic in front of the room where the test is happening. If people need to watch, they should do so over a video conference or some similar facility. Test subjects can get self-conscious if there are too many observers present.
9. When the user has completed their testing: Gather as much information as possible. Ask the user about their impressions, about what worked well and what did not. Ask what they feel could be done to improve things, and whether what they saw matched how they would have done things. This can also involve asking what they recall about the software or the site - it helps highlight the parts that stick in the tester's mind.
10. Go through the recording of the tester interaction; this helps in determining where the tester was able to move fast, where there was hesitation, and most importantly, where the tester expected to find something and did not.
11. If you are still not clear after going through a number of testers, use more! Your product's success depends on getting the flow right, and if that means more usability testing, then so be it.
Posted by Ashish Agarwal at 6/28/2008 05:16:00 PM 0 comments
Labels: Processes, Testing, Usability
Saturday, June 7, 2008
Usability testing tools
Usability testing is a pretty critical part of the development life cycle. It is one of the series of steps (along with user testing and beta testing) that validate whether the product and its features are actually usable by the real end users; feedback from this stage can make the difference between success and failure of the software / website. But the process is only useful if done effectively; done wrongly, it can prove useless or produce misleading results.
Here is a smattering of tools that can help if you are involved in usability testing:
1. Usability Test Data Logger tool v5.0 (link to site)
Some features:
# Cross-platform: Datalogger is a PC- or Macintosh-compatible Microsoft Excel file (requires Microsoft Excel to run).
# Customisable: You can enter participant details, task names, task order, pre- and post-test interview questions and include your own satisfaction questionnaire.
# Captures quantitative data: The spreadsheet includes preset task completion scores and includes a built-in timer to record time-on-task.
# Captures qualitative data: Allows data entry of qualitative observations for each participant and each task.
# Provides real-time data analysis: Automatically generates charts illustrating task completion, time-on-task and user satisfaction with the product.
2. Morae Usability Testing for Software and Web Sites (link to site)
From the website:
Morae gives you the tools to:
* Instantly calculate and graph standard usability measurements, so you can focus on understanding results
* Visualize important results in ways that make them more understandable and meaningful
* Present results persuasively and professionally
Morae bundle can be bought for $1495 (link)
3. A website that explains how to use Macromedia Director as a Usability testing tool (link to article)
From the website:
While Director will not eliminate standard development environments or programming languages, it will enhance the prototyping and usability testing experience by allowing developers to gather feedback from prospective clients and users early in product development. Early prototyping will allow developers to identify and fix defects early in development.
4. QUIS: The Questionnaire for User Interaction Satisfaction (link to site)
From the website:
The purpose of the questionnaire is to:
1. guide in the design or redesign of systems,
2. give managers a tool for assessing potential areas of system improvement,
3. provide researchers with a validated instrument for conducting comparative evaluations, and
4. serve as a test instrument in usability labs. Validation studies continue to be run. It was recently shown that mean ratings are virtually the same for paper versus computer versions of the QUIS, but the computer version elicits more and longer open-ended comments.
5. Rational Policy Tester Accessibility Edition (link to site)
From the website:
The Accessibility Edition helps ensure website user accessibility by monitoring for over 170 accessibility checks. It helps determine the site's level of compliance with government standards and displays results in user-friendly dashboards and reports.
* Improves visitor experience by exposing usability issues that may drive visitors away
* Facilitates compliance with federally-regulated guidelines and accessibility best practices
* Enlarges your market opportunity: over 10 percent of the online population has a disability (750 million people worldwide, 55 million Americans)
* Operating systems supported: Windows
6. Serco service (link to site)
They have a service that covers the following stages:
Planning and Strategy
User needs
Defining concepts
Usability evaluation
7. Web accessibility toolbar (link to site)
From website:
The Web Accessibility Toolbar has been developed to aid manual examination of web pages for a variety of aspects of accessibility. It consists of a range of functions that:
* identify components of a web page
* facilitate the use of 3rd party online applications
* simulate user experiences
* provide links to references and additional resources
8. WAVE 4.0 Beta (link to site)
From website:
WAVE is a free web accessibility evaluation tool provided by WebAIM. It is used to aid humans in the web accessibility evaluation process. Rather than providing a complex technical report, WAVE shows the original web page with embedded icons and indicators that reveal the accessibility information within your page.
9. Readability Test (link to site)
The website provides a service that helps in determining how readable a site is.
From the website:
Gunning Fog, Flesch Reading Ease, and Flesch-Kincaid are reading level algorithms that can be helpful in determining how readable your content is. Reading level algorithms only provide a rough guide, as they tend to reward short sentences made up of short words. Whilst they're rough guides, they can give a useful indication as to whether you've pitched your content at the right level for your intended audience.
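To show how simple these reading level algorithms really are, here is a rough sketch of the Flesch Reading Ease score: 206.835 - 1.015 × (words / sentences) - 84.6 × (syllables / words). The syllable counter below is a naive vowel-group heuristic, which itself illustrates why such scores are only rough guides.

```python
# A rough sketch of the Flesch Reading Ease score mentioned above.
# Higher scores indicate easier reading. The syllable counter is a
# crude vowel-group heuristic, not a dictionary-based count.

import re

def count_syllables(word):
    """Approximate syllables as runs of vowels (a naive heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

sample = ("Reading level algorithms only provide a rough guide. "
          "They reward short sentences made up of short words.")
print(f"{flesch_reading_ease(sample):.1f}")
```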
If you have feedback on the above, or other tools that have been useful for you, please comment.
Posted by Ashish Agarwal at 6/07/2008 03:29:00 PM 1 comments
Labels: Testing, Tools, Usability