
Saturday, June 28, 2008

Guidelines for usability testing

When we went in for usability testing, I was part of the team that would evaluate the results, so it was important for me to understand the usability testing process in more detail. Besides understanding the need for usability testing and the process itself, the team also needed to know the guidelines for usability testing, since these help explain how usability testing works. Some of the guidelines I learnt (both from reading the literature on the subject, and from studying the process in practice) were:

1. Deciding the target audience for your usability testing: Given that the outcome of the usability testing will help determine changes to your design strategy, it is absolutely essential that the testing be done with people who are good representatives of the target audience; this means the selection should be done carefully. Avoid the temptation to cut corners, or to select neighbors who seem to represent the target segment. Devise a set of screening questions that will help determine whether the selected people actually are good representatives; similarly, use experts to help you find good test subjects. And, except in a few cases, don't let them know the name of the company - it may introduce bias into their results.

1. a) Different users may specialize in separate workflows. For example, if you want to test a new shopping site catering to working professionals, you should design an appropriate screening questionnaire. Review the answers to get an idea of which area each user gravitates towards; this can help in deciding the exact set of users.
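To make the screening step concrete, here is a minimal sketch of scoring questionnaire answers against a target profile. The question names, weights, and candidate data are entirely hypothetical, invented for illustration; a real screener would be built from your own questionnaire.

```python
# Hypothetical screener: rank candidates by how many answers match the
# target profile. All question names and values here are made up.

def score_candidate(answers, profile):
    """Count how many screener answers match the target profile."""
    return sum(1 for q, wanted in profile.items() if answers.get(q) == wanted)

target_profile = {
    "shops_online": "weekly",
    "occupation": "working_professional",
    "knows_company": "no",   # screen out brand-aware candidates to limit bias
}

candidates = [
    {"name": "A", "shops_online": "weekly", "occupation": "working_professional", "knows_company": "no"},
    {"name": "B", "shops_online": "never", "occupation": "student", "knows_company": "no"},
]

ranked = sorted(candidates, key=lambda c: score_candidate(c, target_profile), reverse=True)
print([c["name"] for c in ranked])  # best-matching candidates first
```

The same scores can also be used to bucket users into the workflow specializations mentioned above.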

2. Before actually starting the testing: Remember, your tests will only be good if the participants are comfortable with the whole process. They need to feel that the environment in which the test takes place is similar to their home or work environment. And of course, go easy on the legalese - you may want to protect your in-progress development and have them sign NDAs, but if you make the paperwork full of complicated legal terms, they are likely to get confused. If your office is not very accessible, consider doing the testing in a more convenient location.

3. Starting the usability testing: Don't plunge the users directly into the testing process. Talk with them first, explain what the website is about and what the URL is, and get some initial feedback about what they expect from such a site. If they mention particular phrases or terms, it is worth probing what they mean by them. A few words of polite conversation make participants more comfortable.

4. Decide what you want to get tested: When selecting tasks for users to attempt, it is absolutely essential that you set aside any attachment to your favorite sections of the site. You may have added a great new section that was very difficult to develop, but if it is not critical to the success of the site, then getting it evaluated is not a high priority. Select the tasks that are critical to the success of the site.

5. Scenarios are always better for such evaluations: Again, you need to talk in the language of the customer. You may have had great internal debates on the naming of features, but if you are asking customers to evaluate some flows, then you should:
- Ask them to try out workflows / use cases (for example, you need to find some white shoes for your kid, and pay for them through the card that you normally use)
- Use simple language (avoid more technical words such as payment processing, and instead use phrases such as payment options)
- Set tasks that have a logical conclusion

6. The actual task execution: A core principle of usability testing is that people should not be loaded down with too much at once. Get them to do one task at a time, and focus on their responses during the execution of the task. In many cases, users may need to be given inputs during the course of the task (say, you want them to test the convenience of posting videos to YouTube); in such cases, make sure that they have the equipment needed for the task (in this case, either a sample of home videos, or a camera readily available).

7. The participant is not at fault: Sometimes users struggle during the usability test, or get stuck at places where you would think things are very simple. Remember, all their feedback is important, and if they are unable to do some tasks or parts of tasks, it is most likely a reflection on the task rather than any inherent inability on the part of the participants. Further, if they are confused by something, or they have to make a choice, don't guide them down a particular path (doing so would most likely introduce a bias). If you are asked a direct question, then reply, but do not venture an opinion.

8. Don't let distractions creep in: Once the user has started, minimise distractions. Prevent people from coming and going in the location, and this includes heavy traffic in front of the room where the test is happening. If others need to watch, they should do so over a video link or some similar facility. Test subjects can get self-conscious if there are too many observers present.

9. The user has completed their testing: Once the user is done, you need to gather as much information as possible. Ask the user about their impressions, about what worked well and what did not. Ask what they feel could be done to improve things, and whether what they saw matched how they would have done things. This could also involve asking them what they recall about the software or the site - this helps highlight the parts that stick in the tester's mind.

10. Go through the recording of the tester interaction; this helps in determining where the tester was able to move fast, where there was hesitation, and most importantly, where the tester expected to find something and did not.
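If your recording tool exports a timestamped event log, hesitation can even be flagged programmatically by looking for long gaps between user actions. The event format and the 8-second threshold below are assumptions for illustration, not the format of any particular tool.

```python
# Sketch: flag hesitation in a session recording by finding long gaps
# between timestamped user events. Threshold and log format are assumed.

def find_hesitations(events, threshold=8.0):
    """Return (prev_action, next_action, gap_seconds) for gaps over threshold."""
    pauses = []
    for (t1, a1), (t2, a2) in zip(events, events[1:]):
        gap = t2 - t1
        if gap > threshold:
            pauses.append((a1, a2, gap))
    return pauses

session = [(0.0, "open home page"), (3.2, "click 'Shoes'"),
           (4.0, "scroll"), (19.5, "click 'Checkout'")]  # long pause before checkout

print(find_hesitations(session))
```

The long gaps are exactly the places worth re-watching in the recording: they often mark where the tester expected to find something and did not.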

11. If you are still not clear after going through a number of testers, use more! Your product's success depends on getting the flow right, and if that means you need more usability testing done, then so be it.
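There is a well-known rule of thumb (from Nielsen and Landauer's work) for estimating how many problems a given number of testers will uncover: if each problem is found by a single tester with probability p (about 0.31 in their published data), n testers find roughly 1 - (1 - p)^n of the problems. A quick sketch:

```python
# Rule-of-thumb estimate (after Nielsen & Landauer) of the share of
# usability problems that n testers will uncover, assuming each problem
# is found by any single tester with probability p (~0.31 in their data).

def problems_found(n, p=0.31):
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(n, round(problems_found(n), 2))
```

By this estimate five testers find roughly 85% of problems, which is why small rounds of testing, repeated after fixes, tend to beat one large round.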

Saturday, June 7, 2008

Usability testing tools

Usability testing is a critical part of the development life cycle. It is one of the series of steps (along with user testing and beta testing) that validate whether the product (and its features) is actually usable by the real end users; feedback from this stage can make the difference between the success and failure of the software / website. But such a process is only useful if it is done effectively; done wrongly, it can prove either useless or actively misleading.
Here is a smattering of tools that can help if you are involved in usability testing:

1. Usability Test Data Logger tool v5.0 (link to site)
Some features:
# Cross-platform: Datalogger is a PC- or Macintosh-compatible Microsoft Excel file (requires Microsoft Excel to run).
# Customisable: You can enter participant details, task names, task order, pre- and post-test interview questions and include your own satisfaction questionnaire.
# Captures quantitative data: The spreadsheet includes preset task completion scores and includes a built-in timer to record time-on-task.
# Captures qualitative data: Allows data entry of qualitative observations for each participant and each task.
# Provides real-time data analysis: Automatically generates charts illustrating task completion, time-on-task and user satisfaction with the product.
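To show what the quantitative side of such a logger boils down to, here is a minimal sketch of per-task completion rate and time-on-task, summarized across participants. The record format is invented for illustration and is not Data Logger's actual file layout.

```python
# Sketch of usability-test metrics: completion rate and mean time-on-task.
# Field names and sample data are illustrative assumptions.

from statistics import mean

results = [
    {"participant": "P1", "task": "find white shoes", "completed": True, "seconds": 42},
    {"participant": "P2", "task": "find white shoes", "completed": True, "seconds": 65},
    {"participant": "P3", "task": "find white shoes", "completed": False, "seconds": 120},
]

completion_rate = mean(1 if r["completed"] else 0 for r in results)
avg_time_success = mean(r["seconds"] for r in results if r["completed"])

print(f"completion rate: {completion_rate:.0%}")
print(f"mean time-on-task (successes only): {avg_time_success:.1f}s")
```

Note that time-on-task is usually averaged over successful attempts only, since failed attempts end for unrelated reasons (giving up, moderator intervention).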

2. Morae Usability Testing for Software and Web Sites (link to site)
From the website:
Morae gives you the tools to:
* Instantly calculate and graph standard usability measurements, so you can focus on understanding results
* Visualize important results in ways that make them more understandable and meaningful
* Present results persuasively and professionally
The Morae bundle can be bought for $1495 (link)

3. A website that explains how to use Macromedia Director as a Usability testing tool (link to article)
From the website:
While Director will not eliminate standard development environments or programming languages, it will enhance the prototyping and usability testing experience by allowing developers to gather feedback from prospective clients and users early in product development. Early prototyping will allow developers to identify and fix defects early in development.

4. QUIS: The Questionnaire for User Interaction Satisfaction (link to site)
From the website:
The purpose of the questionnaire is to:
1. guide in the design or redesign of systems,
2. give managers a tool for assessing potential areas of system improvement,
3. provide researchers with a validated instrument for conducting comparative evaluations, and
4. serve as a test instrument in usability labs. Validation studies continue to be run. It was recently shown that mean ratings are virtually the same for paper versus computer versions of the QUIS, but the computer version elicits more and longer open-ended comments.

5. Rational Policy Tester Accessibility Edition (link to site)
From the website:
The Accessibility Edition helps ensure website user accessibility by monitoring for over 170 accessibility checks. It helps determine the site's level of compliance with government standards and displays results in user-friendly dashboards and reports.
* Improves visitor experience by exposing usability issues that may drive visitors away
* Facilitates compliance with federally-regulated guidelines and accessibility best practices
* Enlarges your market opportunity: over 10 percent of the online population has a disability (750 million people worldwide, 55 million Americans)
* Operating systems supported: Windows
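For a feel of what one of those "170 accessibility checks" looks like in practice, here is a minimal sketch of a single check - flagging images with no alt text (WCAG's non-text content rule) - using Python's standard-library HTML parser. Real tools like Policy Tester run far broader and deeper rule sets than this.

```python
# Minimal example of one automated accessibility check: find <img> tags
# that lack an alt attribute. A toy version of what audit tools automate.

from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(dict(attrs).get("src", "<no src>"))

page = '<p><img src="logo.png" alt="Company logo"><img src="banner.png"></p>'
checker = MissingAltChecker()
checker.feed(page)
print(checker.violations)  # images lacking alt text
```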

6. Serco service (link to site)
They have a service that covers the following stages:
Planning and Strategy
User needs
Defining concepts
Usability evaluation

7. Web accessibility toolbar (link to site)
From website:
The Web Accessibility Toolbar has been developed to aid manual examination of web pages for a variety of aspects of accessibility. It consists of a range of functions that:
* identify components of a web page
* facilitate the use of 3rd party online applications
* simulate user experiences
* provide links to references and additional resources

8. WAVE 4.0 Beta (link to site)
From website:
WAVE is a free web accessibility evaluation tool provided by WebAIM. It is used to aid humans in the web accessibility evaluation process. Rather than providing a complex technical report, WAVE shows the original web page with embedded icons and indicators that reveal the accessibility information within your page.

9. Readability Test (link to site)
The website provides a service that helps in determining how readable a site is.
From the website:
Gunning Fog, Flesch Reading Ease, and Flesch-Kincaid are reading level algorithms that can be helpful in determining how readable your content is. Reading level algorithms only provide a rough guide, as they tend to reward short sentences made up of short words. Whilst they're rough guides, they can give a useful indication as to whether you've pitched your content at the right level for your intended audience.
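The two formulas named above are simple enough to implement directly. This sketch uses the published coefficients (Flesch Reading Ease = 206.835 - 1.015 x words/sentence - 84.6 x syllables/word; Gunning Fog = 0.4 x (words/sentence + 100 x complex words/words)), but the syllable counter is a crude vowel-group heuristic, so treat the scores as approximate.

```python
# Readability scores implemented from the standard formulas.
# Syllable counting is a rough heuristic (counts vowel groups).

import re

def syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syl / len(words))

def gunning_fog(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / sentences + 100 * len(complex_words) / len(words))

sample = "The cat sat on the mat. It was happy."
print(round(flesch_reading_ease(sample), 1))  # higher = easier to read
print(round(gunning_fog(sample), 1))          # roughly a school grade level
```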

If you have feedback on the above, or other tools that have been useful for you, please comment.

Usability testing methods

Usability testing can take the form of several methods and techniques (depending on the situation and need):

Cognitive Walkthrough: Performed continually during the development cycle, cognitive walkthroughs evaluate the system in terms of the user's thought processes that drive decision making (these include the ability to reason as well as other factors such as memory load). This helps in understanding how the system will appear to an infrequent user. The input for such users could be in the form of paper workflows or a working prototype. More details can be found at this link.

Focus Groups: Focus groups are a method borrowed from marketing, whereby a group of people from the target segment is brought together in a discussion format. A person from the product team acts as the moderator and prepares a list of issues to be discussed. More details from this site (link).

GOMS: Goals, Operators, Methods, and Selection Rules; a somewhat involved technique for modeling and predicting human task performance.
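The simplest member of the GOMS family is the Keystroke-Level Model, which predicts expert task time by summing standard times for primitive operators (values from Card, Moran and Newell). A small sketch of the idea; the example task is invented:

```python
# Keystroke-Level Model (simplest GOMS variant): predict expert task time
# by summing standard operator times from Card, Moran & Newell.

KLM_TIMES = {
    "K": 0.2,   # keystroke (average typist)
    "P": 1.1,   # point at a target with the mouse
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.1,   # mouse button press or release
}

def predict_seconds(ops):
    return sum(KLM_TIMES[op] for op in ops)

# e.g. think, reach for mouse, point at search field, click, type 4 keys
task = ["M", "H", "P", "B", "B"] + ["K"] * 4
print(round(predict_seconds(task), 2))
```

Comparing predicted times for two candidate workflows gives a quick, cheap way to argue which one is faster before any user ever touches it.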

Prototyping: Prototyping is a very popular method for demonstrating a system to internal users and to external usability testers. Prototypes can range from paper models to nearly functional interactive prototypes.

Walk-Around Review: This type of study is meant to harness the observations of other people in the company who are not part of the engineering team. The idea is to place paper mockups on the walls of the company along with space for comments so that people can write their comments; in addition, early builds can be provided to company people for them to use and provide comments on. Such a method can provide a lot of useful tips (and have an incidental benefit of providing greater visibility to your project).

Field Observation: Representatives of the product team go to the actual users' workplaces and observe how they use the software in their daily work (for example, if a digital imaging application for professional photographers is being developed, a good approach is to spend multiple days tagging along with photographers and observing their usage of the application).

Task Analysis: An analyst determines the user's goals and tasks, then makes recommendations aimed at increasing efficiency and user-friendliness. The objective is to determine how the user actually feels comfortable using the software / site.

Interviews/Observations: These are one-to-one discussions. Hold a thorough discussion with users about what they do, and observe their usage at the same time. This helps in making a realistic determination of their patterns of usage.

Usability Inspection: reviews a system based on a set of usability guidelines. Experts familiar with issues of usability in design perform the usability inspection.

User Testing: Users (sampled from the final target customer segment) use the final software / site. These can be final users from the client side (if the software is developed for a specific client), or a sample set of users. Usability experts study their interactions and figure out the pain points and the improvements needed.

Tuesday, June 3, 2008

About usability testing and timing

Suppose you are in a tight development cycle. You have to deliver either a new product or the next version of an existing product. Getting the features of a product right is always a tough task, given that there are a number of competing features that seem important, and prioritizing the features is very important. This decides the priorities that the engineering team (the feature development team) will follow during the development cycle.
How is this priority actually decided? If the company is in the business of defining an absolutely new product that has not been conceptualized yet, then getting feedback from prospective customers is difficult; however, if there are already customers using an existing product (from the same company or a rival company), then it is absolutely essential that these users be polled about features, so that there is a good idea of which features are most critical (it would also help to identify features that customers would be willing to pay a premium for).
Now consider that we are in the development phase of the project lifecycle, where the UI team works along with the engineering team to define the workflow for the feature. There is a lot of discussion around what the feature should be like (with the possibility of the discussion getting heated, as is normal for feature discussions), and eventually most people agree on what the feature should be like. The UI specs of the feature are drawn up and the feature implementation is based on the spec. At this point, everything may seem settled, but it is critically important that this final implementation be evaluated for usability issues. The team needs to find a set of people who adequately represent the final set of users, and get them to see the feature working in the actual product. Such usability testing will help determine whether the final feature is actually something the users can accept, or whether there are problems that need to be addressed.
The timing of such user testing is critical. Typically, such workflows reach a final form close to the end of the cycle, and this is the form in which users can actually exercise them. Conversely, however, this point is also very late in the cycle, and the team will be hesitant to accept significant changes, since the time required to make them may not be available.
What is the solution? The approach that seems to work is much more active involvement with users: showing them mockups as the workflow gets more concrete, holding active question-and-answer sessions about what they may be looking for, and continuing until they can review the actual product implementation. Further, if a workflow is very new and contentious, it makes sense to try and complete it earlier. And finally, time needs to be built into the schedule to accommodate such changes.
