5363 Presentation: TCR Analyzes Academic Advising Artifacts

https://youtu.be/9oF4cFiQnY8

Findings

Sample information

| Sample # | Date | Participant/Student | Participant/Advisor | Reason | Eclectic-Axial Coding Word Count |
| --- | --- | --- | --- | --- | --- |
| 1 | 7/13/2009 | Devin | Tina | Red Raider Orientation | 410 |
| 2 | 3/22/2012 | Devin | Jared | Discovery! Major Map Debrief | 221 |
| 3 | 8/7/2013 | Devin | Jared | Campus Resource Advising | 46 |
| 4 | 10/29/2012 | Harry | Patty | Discovery! Major Map Debrief | 293 |
| 5 | 4/22/2013 | Harry | Patty | Course Selection Advising | 162 |
| 6 | 9/11/2012 | Harry | Patty | Discovery! Major Map Debrief | 122 |
| 7 | 2/28/2013 | Sam | Tandi | Discovery! Major Map Debrief | 322 |
| 8 | 11/20/2012 | Sam | Tandi | Course Selection Advising | 298 |
| 9 | 5/9/2013 | Sam | Marty | Course Selection Follow-Up | 137 |

Eclectic-axial coding counts by sample

| Structural Item | Code | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Develop a Plan: Academic Planning | Majors | 0 | 0 | 3 | 3 | 1 | 0 | 3 | 3 | 5 |
| Connect: Relationships | Formal Referrals | 0 | 5 | 0 | 7 | 13 | 1 | 5 | 6 | 18 |
| Solutions: Self-Awareness | Assessments | 0 | 0 | 0 | 3 | 1 | 0 | 1 | 2 | 1 |
| Connect: Relationships | Courtesy | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 |
| Connect: Relationships | Well-Check | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| Solutions: Self-Awareness | Review | 1 | 2 | 1 | 1 | 1 | 0 | 1 | 1 | 1 |
| Connect: Situation is Not Unique | Reassure | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Develop a Plan | Challenge | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| Connect: Holistic | Engagement | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| Develop a Plan: Set Goals | Goal | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 1 |
| Internal & External Causes | Acknowledgement | 1 | 1 | 0 | 0 | 1 | 0 | 2 | 0 | 0 |
| Develop a Plan: Set Goals | Courses | 7 | 0 | 0 | 4 | 4 | 5 | 4 | 4 | 1 |
| Internal & External Causes | Caution | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 |
| Work the Plan | Action | 4 | 4 | 3 | 6 | 1 | 2 | 4 | 1 | 1 |
| Connect: Relationships | Referrals | 2 | 2 | 1 | 7 | 1 | 1 | 2 | 2 | 2 |
| Internal & External Causes | Impetus/Caveat | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| Develop a Plan: Consider Options | Alternatives | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| Connect: Relationships | Access | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Work the Plan | Procedural | 1 | 0 | 0 | 0 | 1 | 0 | 2 | 4 | 1 |
| Connect: Relationships | Courtesy Closing | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 |
| Combat Ignorance: Procedural Information | Reg Check | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |

*Scanned copies of the actual artifacts (post-analysis) are forthcoming.
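
For readers curious how a sample-by-code matrix like the one above can be assembled, here is a minimal Python sketch. The coded_segments records and the tally function are hypothetical illustrations of the general approach, not the actual coding workflow or data used in this analysis.

```python
from collections import Counter

# Hypothetical coded segments, one (sample_number, code) pair per coded passage.
# These example records are illustrative only; they are not the study's data.
coded_segments = [
    (1, "Courses"), (1, "Courses"), (1, "Action"),
    (2, "Formal Referrals"), (2, "Action"),
    (4, "Formal Referrals"), (4, "Courses"),
]

def tally(segments, num_samples=9):
    """Return {code: [count in sample 1, ..., count in sample N]}."""
    counts = Counter(segments)  # keys are (sample, code) pairs
    codes = sorted({code for _, code in segments})
    return {
        code: [counts[(sample, code)] for sample in range(1, num_samples + 1)]
        for code in codes
    }

if __name__ == "__main__":
    for code, row in tally(coded_segments).items():
        print(f"{code:<18} {row}")
```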

References

  1. Hagen PL. Academic advising as dialectic. NACADA J. 1994;14(2):85–88. doi:10.12930/0271-9517-14.2.85.
  2. Taylor MC. End the university as we know it. New York Times. http://www.murraystate.edu/president/news/4-2009.PDF. Published 2009. Accessed April 26, 2013.
  3. Taylor MC. Crisis on campus: A bold plan for reforming our colleges and universities. 1st ed. New York: Alfred A. Knopf; 2010.
  4. Arum R, Roksa J. Academically adrift: Limited learning on college campuses. University of Chicago Press; 2011.
  5. Fowler PR, Boylan HR. Increasing student success and retention: A multidimensional approach. J Dev Educ. 2010;34(2):2–4,6,8–10. Available at: http://eric.ed.gov/?id=EJ986268.
  6. Appleby DC. The teaching-advising connection. In: The Teaching of Psychology: Essays in Honor of Wilbert J. McKeachie and Charles L. Brewer. 2002:121–139. Available at: http://books.google.com/books?hl=en&lr=&id=j5Fq_M-uR6IC&oi=fnd&pg=PA121&ots=HctJkBwsh8&sig=r865OuR5DP9fKCNNMF4JyqGH3NA. Accessed November 8, 2013.
  7. John-Steiner V, Mahn H. Sociocultural approaches to learning and development: A Vygotskian framework. Educ Psychol. 1996;31(3-4):191–206.
  8. Abelman R, Dalessandro A, Janstova P, Snyder-Suhy S, Pettey G. Charting the verbiage of institutional vision: Implications for academic advising. NACADA J. 2007;27(1):22–38. doi:10.12930/0271-9517-27.1.22.
  9. Beck A. Advising undecided students: Lessons from chaos theory. NACADA J. 1999;19(1):45–49. Available at: http://j.mp/19wGnkw. Accessed October 12, 2012.
  10. Champlin-Scharff S. Advising with understanding: Considering hermeneutic theory in academic advising. NACADA J. 2010;30(1):59–65. Available at: http://j.mp/1jeq2X8.
  11. Reardon R, Bullock E. Holland’s theory and implications for academic advising and career counseling. NACADA J. 2004;24(1):111–122. Available at: http://www.eric.ed.gov/ERICWebPortal/detail?accno=EJ808097. Accessed October 12, 2012.
  12. Spicuzza FJ. A customer service approach to advising: Theory and application. NACADA J. 1992;12(2):49–58. Available at: http://j.mp/1ajdZGX. Accessed July 23, 2013.
  13. Aristotle, Freese JH. The “art” of rhetoric. Cambridge, Mass.: Harvard University Press; 1975.
  14. Conley TM. Rhetoric in the European tradition. Chicago: University of Chicago Press; 1994.
  15. Carliner S. The three approaches to professionalization in technical communication. Tech Commun. 2012;59(1):49–65. Available at: http://j.mp/1cCM6s8.
  16. Bandura A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol Rev. 1977;84(2):191.
  17. Berger LL. Applying new rhetoric to legal discourse: The ebb and flow of reader and writer, text and context. J Leg Educ. 1999;49(159). Available at: http://j.mp/1aLAe43. Accessed October 29, 2013.
  18. Brown T. Chapter 20: Critical concepts in advisor training and development. In: Academic advising: A comprehensive handbook. 2nd ed. San Francisco, CA: Jossey-Bass; 2008:309–322.
  19. Corner A, Hahn U. Message framing, normative advocacy and persuasive success. Argumentation. 2010;24(2):153–163. Available at: http://j.mp/1hQ531i. Accessed October 31, 2013.
  20. Feng B, Burleson BR. The effects of argument explicitness on responses to advice in supportive interactions. Commun Res. 2008;35(6):849–874. Available at: http://crx.sagepub.com/content/35/6/849.short. Accessed October 31, 2013.
  21. Hanson AV. Aristotelian appeals in corporate communication: Tracing the communication patterns in an organizational division moving to intranet documentation. 1999. Available at: http://hdl.handle.net/2346/8381. Accessed October 29, 2013.
  22. Zachry M, Cargile Cook K, Faber BD, Clark D. The changing face of technical communication: New directions for the field in a new millennium. In: Proceedings of the 19th Annual International Conference on Computer Documentation. SIGDOC ’01. New York, NY, USA: ACM; 2001:248–260. doi:10.1145/501516.501573.
  23. Brinol P, Petty RE, Valle C, Rucker DD, Becerra A. The effects of message recipients’ power before and after persuasion: a self-validation analysis. J Pers Soc Psychol. 2007;93(6):1040. Available at: http://psycnet.apa.org/psycinfo/2007-17941-009. Accessed November 8, 2013.
  24. McKee A. A beginner’s guide to textual analysis. Metro Mag. 2001;(127):138–149. Available at: http://eprints.qut.edu.au/41993/2/41993. Accessed November 8, 2013.
  25. Sha X, Chang KT. The role of leadership and contextualization on citizenship behaviors in distributed teams: A relational capital perspective. IEEE Trans Prof Commun. 2012;55(4):310–324. doi:10.1109/TPC.2012.2188595.
  26. Saldaña J. The coding manual for qualitative researchers. 2nd ed. Los Angeles: SAGE; 2013.
  27. Rickly RJ. Exploring the dimensions of discourse: A multi-modal analysis of electronic and oral discussion in developmental writing. 1995. Available at: http://j.mp/RicklyDiss.
  28. Office of the CIO. IT Security Policies List. Texas Tech University. 2013. Available at: http://www.depts.ttu.edu/infotech/security/docs/index.php.

Music (Used under Creative Commons License)

Air Combat by Kevin X

An Adventure, Hero After the War, Hunting for Experience, & The Awakening by Epicus

User & Task Analysis

Reflections on Unit 2 - ENGL 5388

In the midst of snow days and holidays, Unit 2 introduced us to the analysis of users and their tasks. On the heels of site visits that taught us to investigate through observation, the articles and discussion led us to consider new methods for selecting users (Caulton, 2001; Kujala & Kauppinen, 2004), a variety of arguments for determining the appropriate number of users to test (Faulkner, 2003; Spool & Schroeder, 2001), and a report on strategies usability practitioners use when confronted with non-conforming outliers along the way (Følstad, Law, & Hornbæk, 2012). Out of its normal win-lose pattern, Microsoft offered a win-win opinion on the selection of tasks when constructing usability tests. As an extra ingredient, Still called a curricular audible, and we began a new team project to introduce Morae, a software solution for facilitating, recording, and codifying observations within any computer-based usability test. Among the readings, I found most interesting Vatrapu’s discussion of the differences observed in facilitator-participant interactions when usability evaluations cross cultures (Vatrapu & Perez-Quinones, 2006). The article made me consider more thoughtfully the interactions our team had with our first Morae test participants and prompted me to speculate on other possibilities; namely, I began to wonder about male-female interactions and other participant-facilitator differences that might affect our testing.

Beyond these contemplations, the heart of this unit has been the recognition that beneficial testing of any product’s usability requires four essential ingredients: 1) whom to test, 2) how many to test, 3) what to test, and 4) what to do when the results don’t all agree. Writing this reflective unit summary was helpful for me, personally, because it helped me see more clearly why each of the articles was important in preparing us for the tasks ahead. In the meantime, the new Morae-based test was instructive: it allowed us to become familiar with new software while growing more routinely accustomed to the structure and rigor expected within the environment of a formal usability test.
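
As a concrete illustration of why the "how many to test" readings disagree, consider the cumulative problem-discovery model usually at the center of that debate: the expected proportion of problems found by n participants is 1 - (1 - p)^n, where p is the average probability that a single participant exposes a given problem. The short Python sketch below is my own illustration of that formula; the p values are assumptions chosen to show how sensitive the "five users is enough" claim is to p, not figures taken from the readings.

```python
# Cumulative problem-discovery model: P(found) = 1 - (1 - p)**n, where p is the
# average probability that a single participant exposes a given problem.
# The p values below are illustrative assumptions, not data from the readings.

def proportion_found(p: float, n: int) -> float:
    """Expected proportion of usability problems uncovered after n participants."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    # low-visibility problems, the oft-cited average, high-visibility problems
    for p in (0.10, 0.31, 0.50):
        cells = ", ".join(f"n={n}: {proportion_found(p, n):.0%}" for n in (3, 5, 10, 15))
        print(f"p = {p:.2f} -> {cells}")
```

With p near .31, five participants are expected to uncover roughly 84% of problems, which is the classic five-user argument; with p near .10, the same five participants uncover only about 41%, which is closer to Spool and Schroeder’s objection.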

I have found curious the balance needed as we walk the tightrope of client-consultant relations; behaving professionally without embarrassing ourselves in the process of experiential learning has been a new experience for everyone on my team. There is a lot within this particular unit that I value because Dr. Still allowed us to move quickly beyond the readings and discussion to “jump in and get our hands dirty.”

As one who particularly values the ideal model as a best practice that makes a theoretical principle practically understandable, the vacuum I have found is the chance to experience, as a participant, an excellently run usability test. I’d really like to see one done properly, not as a learning exercise, so that I might more thoughtfully contribute to the creation of new usability tests in the future. I suppose I want to sit in the usability test as a user and watch with my eyes (see) what the expert is doing (do) in order to better understand what I’m reading and hearing in class (say).

As an aside, this whole line of questioning prompts me to consider the viability of a usability test that takes as its subject our university teaching model, the order of its see, say, and do processes, and the (in)validity of involving students as the users who define the expected/desired/planned outcomes of any test the system might undergo.

With specific regard to our testing experience, the Unit’s content has been incredibly instructive. Though our primary focus was on learning the Morae technology as a tool for future use, I think my teammates would agree that our convenience sample, its small size, and the variety in our results were substantial inhibitors to any real understanding of Microsoft Word’s usability for the typical undergraduate writer. By surveying the potential users in advance, I believe we gained much in creating a hybrid of user-defined and heuristically defined tasks for testing; it is my strong opinion that this approach will be key to making our future testing increasingly relevant, reliable, and valid. We also gained a valuable understanding of the Morae software, experienced further coherence and flexibility as collaborative members of a testing unit, and now have a portfolio of successes and failures in and beyond the testing environment that will thoroughly inform our planning, prototyping, client interactions, and testing protocols for the semester project.

As a team member, I’m quite pleased with the challenges and successes that our group has experienced together. I particularly appreciate the approach our group has taken to work together in a healthy, collaborative, learning-focused manner, and I hope we continue it as the balance of our class grade comes to rest on our final project. I think we’ve established expectations and processes characteristic of healthy teams, and I expect these will serve us well as we engage the work of Units 3, 4, and 5.

Questions

  1. How can a usability consultant help a potential client understand the possible outcomes of usability testing in order to establish better expectations prior to the beginning of work?
  2. While on-site (and distributed) Morae tests can contribute much to the scientific validity of a computerized usability test, what other tools exist for different types of usability tests, e.g., a child’s use of a Kindle Fire, a homemaker’s use of a cookbook, or a physician’s use of a handheld device in diagnosis and documentation?
  3. Are any best-practice examples available of usability studies that include all of the necessary elements, e.g., requirements gathering, test design, test execution (including video), analysis, and outcomes?

References

Caulton, D. A. (2001). Relaxing the homogeneity assumption in usability testing. Behaviour & Information Technology, 20(1), 1–7.

Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, 35(3), 379–383.

Følstad, A., Law, E. L. C., & Hornbæk, K. (2012). Outliers in usability testing: How to treat usability problems found for only one test participant? In Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design (pp. 257–260). ACM.

Kujala, S., & Kauppinen, M. (2004). Identifying and selecting users for user-centered design. In Proceedings of the Third Nordic Conference on Human-Computer Interaction (pp. 297–303). Retrieved from http://dl.acm.org/citation.cfm?id=1028060

Spool, J., & Schroeder, W. (2001). Testing web sites: Five users is nowhere near enough. In CHI ’01 Extended Abstracts on Human Factors in Computing Systems (pp. 285–286). Retrieved from http://dl.acm.org/citation.cfm?id=634236

Vatrapu, R., & Perez-Quinones, M. A. (2006). Culture and usability evaluation: The effects of culture in structured interviews. Journal of Usability Studies. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.101.5837

What is Usability?

Reflections on Unit 1 - ENGL 5388

Unit 1 has challenged us to answer the question, “What is usability?” As of today, my answer is, “one particular tool that uses scientific techniques to examine observed user behavior en route to another iteration of a product’s progressive and continual improvement.” No, that’s too wordy. “Usability is the therblig of user-centered design.” Much better; Grice would be proud. In an attempt to explain this summary, what follows are my thoughts from the readings, assignments, and discussion of Unit 1 in Still’s ENGL 5388 course entitled Usability.

Though it would be much cleaner if all the disciplines remained within their lines, I’m constantly reminded that each discipline offers its own particular lens through which we are allowed to observe and understand reality. Late the other night, when engaged in an internal debate over my questions, I decided to re-read some of the articles. I’m thankful that my eyes fell on the following statement: “Technical communicators weren’t the only ones working on these issues” (Redish, 2010, p. 194). Buoyed, I took some time to draw out what I know, what I think, and what I’m questioning. I’m not ready to discuss all of that yet, but here’s a start. I keep thinking about the triangles:

  1. See
  2. Say
  3. Do

So far this semester, each experience where I consider and use these three tools leaves me reaching back to my waist; the tool belt seems to be missing some essential ingredient. Maybe it’s the rhetorician in me, maybe it’s the educator; but as I’m trying to place my finger on what is missing, I’ve become frustrated. I’m set at ease, however, when I liken the usability triad to another framework. I keep finding parallels with the domains of learning engaged by psychologists and educators:

  1. Affective
  2. Cognitive
  3. Psychomotor

It seems to me that, if we’re watching with our eyes (see) what they are doing (do), then we’re referring to one vertex, not two. These combined doing elements might be understood as our investigation of the user’s psychomotor learning. In contrast, as one example, hearing users think aloud while doing gives us far more insight, that is, if our goal is to build a product that is actually more usable than not. From their doing we can infer what they know (cognitive), and we might even presume to understand what they believe (affective). Becker might call this “their context” (2004, paras. 3–4).

Some may say, “Who cares what they believe,” or “We’re about to change what they know.” True, the efficiency of a highly usable web site or other product may not require a particular belief system, and our users may actually arrive with a slate that is blank and ready to be written. It is my opinion, however, that this is highly unlikely. I’m convinced that we’ll miss the boat without the say component that helps us hear what they’re thinking during (think-aloud method) or after (cued recall) a usability test.

Thus far, our readings, class discussions, the observation assignment, and the team’s paper prototyping exercise have done much to deepen my thoughts on how to more scientifically consider user behaviors in the process of iterative design. Even so, I’m looking forward to future conversations that engage the more rhetorical meaning-making processes, motivations, and beliefs of our users. In the meantime, this class has already done much to help me consider the value of approaching improvement more scientifically, and I agree with Wixon (2011, p. 198) when he argues against those who say “usability is outmoded” and “has been replaced by user-centered design.”

I suspect that the insights of rigorous usability testing will provide the essential ingredients for subsequent conversations that spawn innovations in design, better affordances for diverse users and user communities, and altogether improved (and continually improving) outcomes. Still says as much in his chapter that analogizes usability and ecosystems (2010, p. 93). I think it would be very interesting to search for other disciplinary parallels when looking at each of the tools within the usability tool belt. One might begin with the table that introduces “Various Implementations of Contextual Inquiry” (Raven & Flanders, 1996, p. 4). Like my comments above regarding the learning domains and like Still’s discussion of ecosystems, I suspect there are other frameworks and strategies that usability experts would be wise to integrate rather than reinvent.

Looking ahead to the balance of this semester, I’m particularly intrigued to observe and experience the eye-tracking technology that will allow us to observe (and record) user behaviors, especially because these detailed and highly accurate observations seem likely to provide the footholds we need to continue our ascent up the Everest of sustained, beneficial, and continual improvement.

Questions

1. Am I off-base to look at usability as a tool (or set of tools) within the larger purpose of user-centered design?

2. Considering our backgrounds in a variety of disciplines, what other meaning-making strategies and practices can our class think of that would be useful as we develop our own understanding of usability?

3. Are any best-practice examples available of usability studies that include all of the necessary elements, e.g., requirements gathering, test design, test execution (including video), analysis, and outcomes?

References

Becker, L. (2004, June 15). 90% of all usability testing is worthless [Blog post]. Adaptive Path. Retrieved February 10, 2013, from http://www.adaptivepath.com/ideas/e000328

Raven, M. E., & Flanders, A. (1996). Using contextual inquiry to learn about your audiences. ACM SIGDOC Asterisk Journal of Computer Documentation, 20(1), 1–13.

Redish, J. (2010). Technical communication and usability: Intertwined strands and mutual influences. IEEE Transactions on Professional Communication, 53(3), 191–201.

Still, B. (2010). Mapping usability: An ecological framework for analyzing user experience. Usability of Complex Information Systems, 89.

Wixon, D. (2011). The unfulfilled promise of usability engineering. Journal of Usability Studies, 6(4), 198–203.