Unit 1 has challenged us to answer the question, “What is usability?” As of today, my answer is, “one particular tool that uses scientific techniques to examine observed user behavior en route to another iteration of a product’s progressive and continual improvement.” No, that’s too wordy. “Usability is the therblig of user-centered design.” Much better; Grice would be proud. To unpack that summary, what follows are my thoughts from the readings, assignments, and discussion of Unit 1 in Still’s ENGL 5388 course, Usability.
Though it would be much cleaner if all the disciplines remained within their lines, I’m constantly reminded that each discipline offers its own particular lens through which we are allowed to observe and understand reality. Late the other night, while engaged in an internal debate over my questions, I decided to re-read some of the articles. I’m thankful that my eyes fell on the following statement: “Technical communicators weren’t the only ones working on these issues” (Redish, 2010, p. 194). Buoyed, I took some time to draw out what I know, what I think, and what I’m questioning. I’m not ready to discuss all of that yet, but here’s a start. I keep thinking about the triangles:
So far this semester, each experience where I consider and use these three tools leaves me reaching back to my waist; the tool belt seems to be missing an essential tool. Maybe it’s the rhetorician in me, maybe it’s the educator, but as I try to place my finger on what is missing, I’ve become frustrated. I’m set at ease, however, when I liken the usability triad to another framework. I keep finding parallels with the domains of learning engaged by psychologists and educators:
It seems to me that, if we’re watching with our eyes (see) what they are doing (do), then we’re referring to one vertex, not two. These combined doing elements might be understood as our investigation of the user’s psychomotor learning. In contrast, as one example, hearing users think aloud while doing gives us far more insight, that is, if our goal is to build a product that is actually more usable than not. From their doing we can infer what they know (cognitive), and we might even presume to understand what they believe (affective). Becker might call this “their context” (2004, para. 3–4).
Some may say, “Who cares what they believe?” or “We’re about to change what they know.” True, the efficiency of a highly usable web site or other product may not require a particular belief system, and our users may actually arrive with a slate that is blank and ready to be written. In my opinion, however, this is highly unlikely. I’m convinced that we’ll miss the boat without the say component that helps us to hear what users are thinking during a usability test (the think-aloud method) or after it (cued recall).
Thus far, our readings, class discussions, the observation assignment, and the team’s paper prototyping exercise have done much to deepen my thoughts on how to more scientifically consider user behaviors in the process of iterative design. Even so, I’m looking forward to future conversations that engage the more rhetorical meaning-making processes, motivations, and beliefs of our users. In the meantime, this class has already helped me appreciate the value of approaching improvement more scientifically, and I agree with Wixon (2011, p. 198) when he argues against those who say “usability is outmoded” and “has been replaced by user-centered design.”
I suspect that the insights of rigorous usability testing will provide the essential ingredients for subsequent conversations that spawn innovations in design, better affordances for diverse users and user communities, and altogether improved (and continually improving) outcomes. Still says as much in his chapter that analogizes usability and ecosystems (2010, p. 93). I think it would be very interesting to search for other disciplinary parallels when looking at each of the tools within the usability tool belt. One might begin with the table that introduces “Various Implementations of Contextual Inquiry” (Raven & Flanders, 1996, p. 4). Like my comments above regarding the learning domains, and like Still’s discussion of ecosystems, I suspect there are other frameworks and strategies that usability experts would be wise to integrate rather than reinvent.
Looking ahead to the balance of this semester, I’m particularly intrigued to observe and experience the eye-tracking technology that will allow us to observe (and record) user behaviors, especially because these detailed and highly accurate observations seem likely to provide the necessary footholds to continue our ascent up the Everest of sustained, beneficial, and continual improvement.
1. Am I off-base to look at usability as a tool (or set of tools) within the larger purpose of user-centered design?
2. Considering our backgrounds in a variety of disciplines, what other meaning-making strategies and practices can our class identify that would be useful as we develop our own understanding of usability?
3. Are there any best-practice examples available of usability studies that include all of the necessary elements, e.g., requirements gathering, test design, test execution (including video), analysis, and outcomes?
Becker, L. (2004, June 15). 90% of all usability testing is worthless [Blog post]. Adaptive Path. Retrieved February 10, 2013, from http://www.adaptivepath.com/ideas/e000328
Raven, M. E., & Flanders, A. (1996). Using contextual inquiry to learn about your audiences. ACM SIGDOC Asterisk Journal of Computer Documentation, 20(1), 1–13.
Redish, J. (2010). Technical communication and usability: Intertwined strands and mutual influences. IEEE Transactions on Professional Communication, 53(3), 191–201.
Still, B. (2010). Mapping usability: An ecological framework for analyzing user experience. Usability of Complex Information Systems, 89.
Wixon, D. (2011). The unfulfilled promise of usability engineering. Journal of Usability Studies, 6(4), 198–203.