User & Task Analysis

Reflections on Unit 2 - ENGL 5388

In the midst of snow days and holidays, Unit 2 introduced us to the analysis of users and their tasks. On the heels of site visits, which taught us to investigate through observation, the articles and discussion led us to consider new methods for selecting users (Caulton, 2001; Kujala & Kauppinen, 2004), a variety of arguments for determining the appropriate number of users to test (Faulkner, 2003; Spool & Schroeder, 2001), and a report on strategies usability practitioners use when confronted with non-conforming outliers along the way (Følstad, Law, & Hornbæk, 2012). Departing from its usual win-lose pattern, Microsoft offered a win-win opinion on the selection of tasks when constructing usability tests. As an extra ingredient, Still called a curricular audible, and we began a new team project introducing Morae, a software solution for facilitating, recording, and codifying observations within any computer-based usability test. Among the readings, I found most interesting Vatrapu’s discussion of how cultural differences between facilitator and participant shape usability interactions (Vatrapu & Perez-Quinones, 2006). The article made me consider more thoughtfully the interactions our team had with our first Morae test participants and prompted me to speculate on other possibilities; namely, I began to wonder about male-female interactions and other participant-facilitator differences that might affect our testing.

Beyond these contemplations, the heart of this unit has been a recognition that beneficial testing of any product’s usability requires four essential ingredients: 1) whom to test, 2) how many to test, 3) what to test, and 4) what to do when the participants don’t all agree. Writing this reflective unit summary was personally helpful because it let me see more clearly why each of the articles was important in preparing us for the tasks ahead. In the meantime, the new Morae-based test was instructive: it allowed us to become familiar with new software while growing more accustomed to the structure and rigor expected within the environment of a formal usability test.
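On the “how many” question in particular, I find a small worked example keeps the competing arguments straight. The sketch below is my own illustration, not drawn from any one of the readings: it uses the classic problem-discovery model behind the five-user assumption, in which the share of usability problems found by n users is 1 - (1 - p)^n for an average per-problem discovery probability p, and the sample values of p are illustrative assumptions only.

```python
# A minimal sketch of the problem-discovery model behind the
# "five users is enough" debate. The p values below are
# illustrative assumptions, not figures taken from Faulkner (2003)
# or Spool & Schroeder (2001).

def proportion_found(p: float, n: int) -> float:
    """Expected share of usability problems uncovered by n users,
    where p is the average chance one user exposes a given problem."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    # 0.31 is Nielsen's oft-cited average; 0.10 models a harder-to-probe product.
    for p in (0.31, 0.10):
        for n in (5, 10, 15):
            print(f"p = {p:.2f}, n = {n:2d}: {proportion_found(p, n):4.0%} of problems found")
```

With the generous p = 0.31, five users uncover about 84% of problems; with p = 0.10, the same five users uncover only about 41%, which is essentially the gap Spool and Schroeder point to and that Faulkner’s larger samples address.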

I have been intrigued by the balance needed as we walk the tightrope of client-consultant relations; behaving professionally without embarrassing ourselves in the process of experiential learning has been a new experience for everyone on my team. I value much of this particular unit because Dr. Still allowed us to move quickly beyond the readings and discussion to “jump in and get our hands dirty.”

As one who particularly values the ideal model as a best practice that gives practical shape to a theoretical principle, the vacuum I feel is the missing experience of participating in an excellent usability test. I’d really like to see one done properly, not as a learning exercise, so that I might more thoughtfully contribute to the creation of new usability tests in the future. I suppose I want to sit in a usability test as a user and watch with my own eyes (see) what the expert is doing (do) in order to better understand what I’m reading and hearing in class (say).

As an aside, this whole line of questioning prompts me to consider the viability of a usability test that takes as its subject our university teaching model, the order of its see, say, and do processes, and the (in)validity of involving students as the users who define the expected/desired/planned outcomes of any test the system might undergo.

With specific regard to our testing experience, the unit’s content has been incredibly instructive. Though our primary focus was on learning the Morae technology as a tool for future use, I think my teammates would agree that our convenience sample, its small size, and the variety in our results were substantial inhibitors to any real understanding of Microsoft Word’s usability for the typical undergraduate writer. By surveying the potential users in advance, I believe we gained much in creating a hybrid of user-defined and heuristically defined tasks for testing; it is my strong opinion that this approach will be key to making our future testing increasingly relevant, reliable, and valid. We also gained a valuable understanding of the Morae software, experienced further coherence and flexibility as collaborative members of a testing unit, and now have a portfolio of success and failure in and beyond the testing environment that will thoroughly inform our planning, prototyping, client interactions, and testing protocols for the semester project.

As a team member, I’m quite pleased with the challenges and successes that our group has experienced together. I particularly appreciate the approach our group has taken to working together in a healthy, collaborative, learning-focused manner, and I hope that we continue it as the balance of our class grade comes to rest on our final project. I think that we’ve established expectations and processes characteristic of healthy teams, and I expect these will serve us well as we engage the work of Units 3, 4, and 5.

Questions

  1. How can a usability consultant help a potential client understand the possible outcomes of usability testing in order to establish better expectations before work begins?
  2. While on-site (and distributed) Morae tests can contribute much to the scientific validity of a computerized usability test, what other tools exist for different types of usability tests, e.g., a child’s use of a Kindle Fire, a homemaker’s use of a cookbook, or a physician’s use of a handheld device in diagnosis and documentation?
  3. Are there any best-practice examples available of usability studies that include all of the necessary elements, e.g., requirements gathering, test design, test execution (including video), analysis, and outcomes?

References

Caulton, D. A. (2001). Relaxing the homogeneity assumption in usability testing. Behaviour & Information Technology, 20(1), 1–7.

Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, 35(3), 379–383.

Følstad, A., Law, E. L. C., & Hornbæk, K. (2012). Outliers in usability testing: How to treat usability problems found for only one test participant? In Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design (pp. 257–260). ACM.

Kujala, S., & Kauppinen, M. (2004). Identifying and selecting users for user-centered design. In Proceedings of the Third Nordic Conference on Human-Computer Interaction (pp. 297–303). Retrieved from http://dl.acm.org/citation.cfm?id=1028060

Spool, J., & Schroeder, W. (2001). Testing web sites: Five users is nowhere near enough. In CHI ’01 Extended Abstracts on Human Factors in Computing Systems (pp. 285–286). Retrieved from http://dl.acm.org/citation.cfm?id=634236

Vatrapu, R., & Perez-Quinones, M. A. (2006). Culture and usability evaluation: The effects of culture in structured interviews. Journal of Usability Studies. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.101.5837

What is Usability?

Reflections on Unit 1 - ENGL 5388

Unit 1 has challenged us to answer the question, “What is usability?” As of today, my answer is, “one particular tool that uses scientific techniques to examine observed user behavior en route to another iteration of a product’s progressive and continual improvement.” No, that’s too wordy. “Usability is the therblig of user-centered design.” Much better; Grice would be proud. To unpack that summary, what follows are my thoughts from the readings, assignments, and discussion of Unit 1 in Still’s ENGL 5388 course entitled Usability.

Though it would be much cleaner if all the disciplines remained within their lines, I’m constantly reminded that each discipline offers its own particular lens through which we are allowed to observe and understand reality. Late the other night, when engaged in an internal debate over my questions, I decided to re-read some of the articles. I’m thankful that my eyes fell on the following statement: “Technical communicators weren’t the only ones working on these issues” (Redish, 2010, p. 194). Buoyed, I took some time to draw out what I know, what I think, and what I’m questioning. I’m not ready to discuss all of that yet, but here’s a start. I keep thinking about the triangles:

  1. See
  2. Say
  3. Do

So far this semester, each experience where I consider and use these three tools leaves me reaching back to my waist; the tool belt seems to be missing some essential tool. Maybe it’s the rhetorician in me, maybe it’s the educator; but as I try to put my finger on what is missing, I’ve become frustrated. I’m set at ease, however, when I liken the usability triad to another framework. I keep finding parallels with the domains of learning engaged by psychologists and educators:

  1. Affective
  2. Cognitive
  3. Psychomotor

It seems to me that, if we’re watching with our eyes (see) what they are doing (do), then we’re referring to one vertex, not two. These combined doing elements might be understood as our investigation of the user’s psychomotor learning. In contrast, hearing users think aloud while doing gives us far more insight, at least if our goal is to build a product that is actually more usable than not. From their doing we can infer what they know (cognitive), and we might even presume to understand what they believe (affective). Becker might call this “their context” (2004, para. 3–4).

Some may say, “Who cares what they believe?” or “We’re about to change what they know.” True, the efficiency of a highly usable web site or other product may not require a particular belief system, and our users may actually arrive with a slate that is blank and ready to be written on. It is my opinion, however, that this is highly unlikely. I’m convinced that we’ll miss the boat without the say component that helps us hear what users are thinking during (think-aloud method) or after (cued recall) a usability test.

Thus far, our readings, class discussions, the observation assignment, and the team’s paper prototyping exercise have done much to deepen my thoughts on how to more scientifically consider user behaviors in the process of iterative design. Even so, I’m looking forward to future conversations that engage the more rhetorical meaning-making processes, motivations, and beliefs of our users. In the meantime, this class has already done much to help me consider the value of approaching improvement more scientifically, and I agree with Wixon (2011, p. 198) when he argues against those who say “usability is outmoded” and “has been replaced by user-centered design.”

I suspect that the insights of rigorous usability testing will provide the essential ingredients for subsequent conversations that spawn innovations in design, better affordances for diverse users and user communities, and altogether improved (and continually improving) outcomes. Still says as much in his chapter that analogizes usability and ecosystems (2010, p. 93). I think it would be very interesting to search for other disciplinary parallels when looking at each of the tools within the usability tool belt. One might begin with the table that introduces “Various Implementations of Contextual Inquiry” (Raven & Flanders, 1996, p. 4). Like my comments above regarding the learning domains and like Still’s discussion of ecosystems, I suspect there are other frameworks and strategies that usability experts would be wise to integrate rather than reinvent.

Looking ahead to the balance of this semester, I’m particularly intrigued by the coming opportunity to observe and experience the eye-tracking technology that will allow us to record user behaviors, especially because these detailed and highly accurate observations seem to provide the necessary footholds to continue our ascent up the Everest of sustained, beneficial, and continual improvement.

Questions

1. Am I off-base to look at usability as a tool (or set of tools) within the larger purpose of user-centered design?

2. Considering our backgrounds in a variety of disciplines, what other meaning-making strategies and practices can our class think of, and which of them would be useful as we develop our own understanding of usability?

3. Are there available any best practice examples of usability studies that include all of the necessary elements, e.g., requirements gathering, test design, test execution (including video), analysis, and outcomes?

References

Becker, L. (2004, June 15). 90% of all usability testing is worthless [Blog post]. Adaptive Path. Retrieved February 10, 2013, from http://www.adaptivepath.com/ideas/e000328

Raven, M. E., & Flanders, A. (1996). Using contextual inquiry to learn about your audiences. ACM SIGDOC Asterisk Journal of Computer Documentation, 20(1), 1–13.

Redish, J. (2010). Technical communication and usability: Intertwined strands and mutual influences. IEEE Transactions on Professional Communication, 53(3), 191–201.

Still, B. (2010). Mapping usability: An ecological framework for analyzing user experience. Usability of Complex Information Systems, 89.

Wixon, D. (2011). The unfulfilled promise of usability engineering. Journal of Usability Studies, 6(4), 198–203.

October 20 Discussion Guide: Knowledge: How do technical communicators construct knowledge?

For October 20, we are reviewing only two articles (see the full references below).

The learning objective for this class focuses us on exploring the construction of knowledge, particularly as it is done by technical communicators. The question we are exploring is:

“How do technical communicators construct knowledge?”

This question specifically targets the knowledge construction of technical communicators, which may be difficult to examine without first considering the human process of constructing knowledge more broadly. Before we are able to discuss or even see clearly what our authors are saying, a broader initial approach seems valuable. As an educator whose students’ previous experiences have (generally) been akin to swallowing someone else’s knowledge whole, I relish every opportunity to help them learn how to chew, savor, and fully digest knowledge on their own.

This is one of my favorite topics and I hope you will fully engage in and enjoy the process we’re using this week to learn about knowledge construction through a collective analysis and composition experience, followed by our classroom discussion. So here’s the plan:

  1. Spend 00:04:30 watching an online commentary on undergraduate learning through collaborative composition: http://youtu.be/dGCJ46vyR9o
  2. Spend 00:04:34 watching an online summary video on how our digital composition, organization, and distribution mechanisms are fundamentally changing the ways in which we communicate, compose, and collaborate: http://youtu.be/NLlGopyXT_g
  3. Spend 00:02:51 watching an online video about what GoogleDocs is and the fundamentals of how it works: http://youtu.be/eRqUE6IHTEA
  4. Participate actively in the following GoogleDocs.
  5. Come prepared to enjoy the ensuing discussion.
  6. Watch the keynote referenced below, beginning at 00:13:10, and consider Wesch’s triangle of knowledge-ability versus Sullivan & Porter’s triangle of praxis in research.
  7. Comment here to continue the interaction and extend our class beyond its typical time/space constraints.

References

Harrison, T. M. (2004). Frameworks for the Study of Writing in Organizational Contexts. In J. Johnson-Eilola & S. A. Selber (Eds.), Central works in technical communication (pp. 255-267). New York: Oxford University Press.

Sullivan, P., & Porter, J. E. (2004). On Theory, Practice, and Method: Toward a Heuristic Research Methodology for Professional Writing. In J. Johnson-Eilola & S. A. Selber (Eds.), Central works in technical communication (pp. 300-316). New York: Oxford University Press.

Wesch, M. (2010, June 24). Knowledge-able. Opening Plenary presented at the STLHE Annual Conference 2010, Ryerson University. Retrieved from http://j.mp/wesch_at_ryersonon_2010-jun-24. Skip to 13:10 for the beginning of the actual presentation.