Low-cost test of menu texts

When I had presented this method, a woman came up to me and gave me her card with a note on the back: "Thanks for a good method, I will start using it tomorrow." I have since heard that the method is used not only for testing words and texts, but also for testing icons, and possibly for testing what difference the order of the words in a menu makes.

The method in 3 brief steps


1. Make a list of the texts you want to use in your menu and put it in front of the user.

These are the texts you want to test.

As usual, you test one user at a time. In most cases you need to test 5-8 users.


2. Make a list of tasks the user should use your web site to solve; in this case, all the tasks consist of finding specific pieces of information.

Ask the user about one task at a time, and get him or her to say which menu text he or she would choose to solve the task.


3. Make a form with a column for each menu text and a row for each task. Use the form during the test to note down the answers.

Each time a user would choose a specific menu text in order to do a task, put a mark in the cell on the form where the two meet.

Repeat with all the users, and you have an overview of which texts are easy to use and where users get confused.

All the details about how to do it, as presented at NordiCHI 2000, October 2000

The precise choice of categories and words in menu texts is crucial for usability. If the user makes an error at the top level, it may be impossible for her to find what she is looking for.

Some words display the pop-out effect (Kahneman 1984): they tend to draw the user's attention almost no matter what she is looking for. Other words tend, for unknown reasons, to be overlooked, or they do not give the user sufficient scent (Chi et al. 2000): the words do not clearly indicate the direction to the item the user wants to reach.

A word may have a highly personal meaning for the designer who chooses it, and a group tends to agree on abstract wordings, which are open to many possible associations and interpretations. It is therefore necessary to test the precise words and categories to be used.


I have twice used a new method for testing menu texts and categories, and will describe the first application in detail and the second briefly.

The new method was first applied during the development of a web site with product information for sales people, about 120 web pages and items for download. Based on interviews with two sales people, I made 11 suggested menu texts. Two other sales people were asked to produce a list of topics for use in the test. The list was combined with results from the interviews into a list of a total of 38 information items which sales people would typically look for on a product web site.

The test was then done as follows:

I found comments made by the users very useful for understanding how each menu text was perceived. Therefore, I started to note such comments down.

During the processing of results, I looked for:

The results of the test were used when making a prototype of the structure for the web site. The prototype was usability tested, and the almost finished web site was usability tested again a few weeks before completion.

I have recently applied the method a second time, testing menu texts for mobile phones. The texts were tested with a list of 80 tasks based on previously identified user scenarios. The participants first did a card-sorting, distributing the tasks into the groups they found natural. The test of the menu texts was then done as described earlier.

The results of the card-sorting and the test of the menu texts were processed independently.


The first application of the test effectively identified problems with menu texts, for instance:

The menu test resulted in a total of 8 changes to the 11 texts; the two later usability tests resulted in 3 and 2 changes respectively. The test of menu texts thus identified the most problems.

The second application of the test of menu texts identified problems similar to those found in the first: an instance of the pop-out effect, abstract texts, and texts with no scent.

I found that processing the results was easier if a list of the expected answers was made beforehand and used as a reference during the processing of the results of the menu test. The first application of the test took about 3 days. The second application took about 7 days, of which about 4 days were spent conducting and processing the results of the card-sorting.


The test of menu texts is more realistic than a card-sorting: the user must normally find a specific item in a hierarchical structure, not design a structure of his own. In addition, the card-sorting generates more data and more diverse categories, including subjective and highly personal ones. This makes the processing of data from the card-sorting substantially more difficult than the processing of data from the menu test. The test of menu texts is at least as effective as a normal usability test for identifying problems at the top level. It can cover 80 topics, whereas a user in a normal usability test can typically try 10-15 tasks; it is faster to conduct, and it requires only the menu texts, not even a paper prototype.

A test of menu texts can replace one of the usability tests during the development, but it cannot fully replace usability tests or card-sorting:


The test of menu texts is more realistic than a card-sorting. It makes it possible to test menu texts with more tasks in less time than a normal usability test, and it does not require a prototype. That is important when the design of prototypes tends to become a project of its own.


Chi, Ed H., Peter Pirolli and James Pitkow. (2000). The Scent of a Site: A System for Analyzing and Predicting Information Scent, Usage and Usability of a Web Site. CHI 2000 Conference Proceedings, 161-168.
Kahneman, Daniel and Anne Treisman. (1984). Changing views of attention and automaticity. Varieties of Attention, ed. by Raja Parasuraman and D.R. Davies. Academic Press Inc., Florida.