User & Task Analysis

Reflections on Unit 2 - ENGL 5388

In the midst of snow days and holidays, Unit 2 introduced us to the analysis of users and their tasks. On the heels of site visits, which taught us to investigate through observation, the articles and discussion led us to consider new methods for selecting users (Caulton, 2001; Kujala & Kauppinen, 2004), a variety of arguments for determining the appropriate number of users to test (Faulkner, 2003; Spool & Schroeder, 2001), and a report on the strategies usability practitioners use when confronted with non-conforming outliers along the way (Følstad, Law, & Hornbæk, 2012). Departing from its usual win-lose pattern, Microsoft offered a win-win opinion on the selection of tasks when constructing usability tests. As an extra ingredient, Dr. Still called a curricular audible, and we began a new team project to introduce Morae, a software solution for facilitating, recording, and coding observations within any computer-based usability test.

Among the readings, I found most interesting Vatrapu's discussion of how cross-cultural facilitator-participant pairings affected usability interactions (Vatrapu & Perez-Quinones, 2006). The article made me consider more thoughtfully the interactions our team had with our first Morae test participants and prompted me to speculate on other possibilities. Namely, I began to wonder about male-female interactions and other participant-facilitator differences that might make an impact in our testing.
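
Since the how-many-users debate ultimately turns on simple arithmetic, I find it helpful to sketch the reasoning for myself. What follows is a minimal sketch, assuming the binomial problem-discovery model that Faulkner and Spool & Schroeder both push back against; the per-participant discovery rate p is an assumed average (Nielsen and Landauer's oft-quoted figure is roughly 0.31), not anything measured in our own testing:

    # Sketch of the problem-discovery model behind the five-user claim.
    # The discovery rate p is an assumed average, not a measured value.
    def proportion_found(n, p=0.31):
        """Estimated share of usability problems uncovered by n participants."""
        return 1 - (1 - p) ** n

    for n in (1, 3, 5, 10, 15):
        print(f"{n:2} users -> {proportion_found(n):.0%} of problems found")

With p = 0.31, five participants surface roughly 85% of problems, which is precisely the comfortable assumption that Faulkner's larger samples and Spool & Schroeder's findings complicate.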

Beyond these contemplations, the heart of this unit has been a recognition that beneficial testing of any product's usability requires four essential ingredients: 1) whom to test, 2) how many users to test, 3) what tasks to test, and 4) what to do when the results don't all agree. Writing this reflective unit summary was helpful for me personally because it helped me see more clearly why each of the articles was important in preparing us for the tasks ahead. Meanwhile, the new Morae-based test was instructive: it allowed us to become familiar with new software while growing more accustomed to the structure and rigor expected within the environment of a formal usability test.

I have found curious the balance needed as we walk the tightrope of client-consultant relations; behaving professionally without embarrassing ourselves in the process of experiential learning has been a new experience for everyone on my team. There is much in this particular unit that I value because Dr. Still allowed us to move quickly beyond the readings and discussion to “jump in and get our hands dirty.”

As one who particularly values the ideal model as a best practice, one that provides a practical understanding of a theoretical principle, I've found a vacuum in my own learning: the participant's experience of an excellent usability test. I'd really like to see one done properly, not as a learning exercise, so that I might more thoughtfully contribute to the creation of new usability tests in the future. I suppose I want to sit in a usability test as a user and watch with my own eyes (see) what the expert is doing (do), in order to better understand what I'm reading and hearing in class (say).

As an aside, this whole line of questioning prompts me to consider the viability of a usability test that takes as its subject our university teaching model, the order of its see, say, and do processes, and the (in)validity of involving students as users who define the expected/desired/planned tasks of any test the system might undergo.

With specific regard to our testing experience, this unit's content has been incredibly instructive. Though our primary focus was on learning the Morae technology as a tool for future use, I think my teammates would agree that we saw our convenience sample, its small size, and the variety in our results as substantial inhibitors to any real understanding of Microsoft Word's usability for the typical undergraduate writer. By surveying potential users in advance, I believe we gained much in creating a hybrid of user-defined and heuristically defined tasks for testing; it is my strong opinion that this approach will be key to making our future testing increasingly relevant, reliable, and valid. We also gained a valuable understanding of the Morae software, experienced further coherence and flexibility as collaborative members of a testing unit, and now have a portfolio of success and failure in and beyond the testing environment that will thoroughly inform our planning, prototyping, client interactions, and testing protocols for the semester project.

As a team member, I'm quite pleased with the challenges and successes our group has experienced together. I particularly appreciate the healthy, collaborative, learning-focused manner in which our group has worked, and I hope we continue it as our final project, which carries the balance of our class grade, draws closer. I think we've established expectations and processes characteristic of healthy teams, and I expect these will serve us well as we engage the work of Units 3, 4, and 5.

Questions

  1. How can a usability consultant help a potential client understand the likely outcomes of usability testing in order to establish better expectations before work begins?
  2. While on-site (and distributed) Morae tests can strengthen the scientific validity of a computerized usability test, what other tools exist for different types of usability tests, e.g., a child's use of a Kindle Fire, a homemaker's use of a cookbook, or a physician's use of a handheld device in diagnosis and documentation?
  3. Are any best-practice examples of usability studies available that include all of the necessary elements, e.g., requirements gathering, test design, test execution (including video), analysis, and outcomes?

References

Caulton, D. A. (2001). Relaxing the homogeneity assumption in usability testing. Behaviour & Information Technology, 20(1), 1–7.

Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, 35(3), 379–383.

Følstad, A., Law, E. L. C., & Hornbæk, K. (2012). Outliers in usability testing: How to treat usability problems found for only one test participant? In Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design (pp. 257–260). ACM.

Kujala, S., & Kauppinen, M. (2004). Identifying and selecting users for user-centered design. In Proceedings of the Third Nordic Conference on Human-Computer Interaction (pp. 297–303). Retrieved from http://dl.acm.org/citation.cfm?id=1028060

Spool, J., & Schroeder, W. (2001). Testing web sites: Five users is nowhere near enough. In CHI '01 Extended Abstracts on Human Factors in Computing Systems (pp. 285–286). Retrieved from http://dl.acm.org/citation.cfm?id=634236

Vatrapu, R., & Perez-Quinones, M. A. (2006). Culture and usability evaluation: The effects of culture in structured interviews. Journal of Usability Studies. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.101.5837
