🖥️ Human-Computer Interaction Unit 12 – Usability Evaluation: Methods & Metrics

Usability evaluation is crucial for creating user-friendly products and systems. It involves assessing how well users can interact with interfaces to achieve their goals, focusing on effectiveness, efficiency, and satisfaction. Various methods, from user testing to heuristic evaluations, help identify issues and guide improvements. Planning and conducting usability studies require careful consideration of objectives, participants, and tasks. Analyzing both quantitative and qualitative data provides insights into user behavior and preferences. Key metrics like task success rates and satisfaction scores help measure usability, informing design decisions and iterative improvements.

Key Concepts in Usability Evaluation

  • Usability evaluation assesses how well users can interact with a product or system to achieve their goals
  • Focuses on measuring effectiveness, efficiency, and satisfaction of the user experience
  • Involves gathering both quantitative data (task completion time, error rates) and qualitative data (user feedback, observations)
  • Helps identify usability issues and areas for improvement, and validates design decisions
  • Conducted at various stages of the design process, from early prototypes to fully developed systems
    • Formative evaluation occurs during the design process to guide iterative improvements
    • Summative evaluation assesses the final product's usability before release
  • Considers the context of use, including the users' characteristics, tasks, and environment
  • Employs a range of methods, such as user testing, heuristic evaluation, and surveys, to gather comprehensive usability data

Types of Usability Evaluation Methods

  • User testing involves observing representative users as they perform tasks with the product or system
    • Can be conducted in a controlled lab setting or remotely using screen-sharing tools
    • Provides direct insights into user behavior, thoughts, and challenges
  • Heuristic evaluation is an expert-based method where usability professionals assess the interface against established usability principles (Nielsen's 10 heuristics)
  • Cognitive walkthroughs simulate a user's problem-solving process by having evaluators step through tasks and identify potential usability issues
  • Surveys and questionnaires gather self-reported data from users about their experiences, preferences, and satisfaction
    • System Usability Scale (SUS) is a widely used standardized questionnaire for measuring perceived usability
  • Interviews and focus groups allow for in-depth discussions with users to gain qualitative insights and explore their needs and expectations
  • Eye-tracking studies capture users' eye movements and fixations to understand visual attention and identify areas of interest or confusion
  • A/B testing compares two or more design variations to determine which performs better in terms of usability metrics (conversion rates, task completion)
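
When comparing design variants, a simple significance test helps judge whether an observed difference in task success is real or just noise. The sketch below is illustrative only: the completion counts are hypothetical, and it applies a standard two-proportion z-test rather than any particular team's prescribed analysis.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test for comparing task completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)            # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical data: 42 of 50 participants completed the task with variant A,
# 33 of 50 with variant B.
p_a, p_b, z, p = two_proportion_z(42, 50, 33, 50)
print(f"A: {p_a:.0%}  B: {p_b:.0%}  z = {z:.2f}  p = {p:.3f}")
# A small p-value (e.g., below 0.05) suggests the difference is unlikely to be chance.
```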

Planning a Usability Study

  • Define clear study objectives and research questions aligned with the project goals and stage of development
  • Identify the target user group and recruit representative participants based on relevant characteristics (age, experience, domain knowledge)
    • Determine the appropriate sample size based on the study's scope and available resources (a problem-discovery sketch follows this list)
  • Develop realistic task scenarios that cover key user flows and functionality of the product or system
    • Tasks should be representative of real-world usage and prioritized based on importance and frequency
  • Prepare the test environment, including the necessary equipment (computers, cameras, screen-recording software), and ensure a comfortable, distraction-free space
  • Create a test protocol outlining the study's procedure, instructions for participants, and data collection methods
  • Pilot test the study with a small group to identify and address any issues with the tasks, materials, or logistics before the main study
  • Obtain informed consent from participants and ensure their privacy and confidentiality are protected throughout the study
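
One common way to reason about sample size in formative studies is the problem-discovery model associated with Nielsen and Landauer, which estimates the share of usability problems found by n participants as 1 − (1 − p)^n. The sketch below assumes the commonly cited per-participant discovery probability of about 0.31; the true value varies by product and task, so treat it as a rough planning aid rather than a rule.

```python
# Estimate the share of usability problems uncovered by a study of n participants,
# assuming each participant independently hits a given problem with probability p.
def problems_found(n_participants, p_single=0.31):
    return 1 - (1 - p_single) ** n_participants

for n in (1, 3, 5, 8, 10, 15):
    print(f"{n:>2} participants -> ~{problems_found(n):.0%} of problems found")
# With p around 0.31, five participants uncover roughly 85% of the problems,
# which is why small formative studies are often run iteratively.
```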

Conducting User Testing

  • Welcome participants and provide a brief introduction to the study's purpose and procedure
  • Obtain informed consent and gather necessary demographic information
  • Provide clear instructions and any necessary training on the product or system being tested
  • Observe participants as they complete the assigned tasks, taking notes on their behavior, comments, and any usability issues encountered
    • Encourage participants to think aloud, expressing their thoughts and reasoning as they interact with the interface
    • Avoid interfering or providing assistance unless necessary, allowing participants to explore and problem-solve independently
  • Probe for additional insights and clarification through follow-up questions after each task or at the end of the session
  • Administer post-test questionnaires (SUS, user satisfaction surveys) to gather subjective feedback and ratings
  • Debrief participants, thanking them for their time and contribution, and addressing any remaining questions or concerns
  • Compile and organize the collected data, including task performance metrics, observations, and participant feedback, for analysis

Analyzing Usability Data

  • Quantitative analysis involves calculating key usability metrics, such as task success rates, time on task, error rates, and efficiency
    • Use descriptive statistics (mean, median, standard deviation) to summarize and compare performance across tasks and user groups (a summary-statistics sketch follows this list)
    • Identify patterns, trends, and outliers in the data to highlight areas of strength or weakness in the user experience
  • Qualitative analysis focuses on interpreting user feedback, observations, and behavioral insights to uncover underlying usability issues and user needs
    • Thematic analysis involves coding and categorizing qualitative data to identify common themes and patterns
    • Affinity diagramming is a collaborative technique for grouping and synthesizing insights from multiple data sources
  • Triangulate findings from different evaluation methods (user testing, heuristic evaluation, surveys) to validate and prioritize usability issues
  • Conduct severity ratings to assess the impact and criticality of identified usability problems
    • Severity scales (minor, moderate, major, critical) help prioritize issues for remediation based on their effect on user performance and satisfaction
  • Generate actionable recommendations for design improvements based on the analysis, considering feasibility, development effort, and potential impact on the user experience
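
As a concrete illustration of the quantitative side, the sketch below summarizes hypothetical time-on-task and success data for a single task using Python's standard statistics module.

```python
from statistics import mean, median, stdev

# Hypothetical observations for one task across eight participants
time_on_task = [48.2, 55.1, 39.7, 61.4, 44.9, 72.3, 50.5, 47.0]  # seconds
task_success = [1, 1, 0, 1, 1, 0, 1, 1]                          # 1 = completed

print(f"success rate : {sum(task_success) / len(task_success):.0%}")
print(f"mean time    : {mean(time_on_task):.1f} s")
print(f"median time  : {median(time_on_task):.1f} s")
print(f"std deviation: {stdev(time_on_task):.1f} s")
# Comparing these summaries across tasks or user groups highlights where performance
# breaks down; a large standard deviation often signals that some participants
# struggled far more than others.
```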

Metrics for Measuring Usability

  • Task success rate measures the percentage of participants who successfully complete a task without assistance or critical errors
  • Time on task captures the average time taken by participants to complete a specific task, indicating the efficiency of the interaction
  • Error rate counts the number of errors made by participants during task performance, reflecting the effectiveness and intuitiveness of the interface
  • System Usability Scale (SUS) provides a standardized score (0-100) of the perceived usability based on participant ratings of 10 statements
    • SUS scores above 68 are considered above average, while scores below 68 indicate potential usability concerns (a scoring sketch follows this list)
  • Net Promoter Score (NPS) measures user loyalty and likelihood to recommend the product, based on a single 0-10 rating; the score is the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6)
  • User satisfaction ratings assess participants' subjective perceptions of the user experience, often using Likert scales (strongly disagree to strongly agree)
  • Conversion rates and click-through rates (for digital products) measure the percentage of users who complete desired actions (purchases, sign-ups, clicks)
  • Learnability metrics, such as time to first use or number of errors on initial attempts, indicate how easily users can learn and adopt the product or system
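
Because SUS and NPS have fixed scoring rules, they are easy to compute from raw responses. The sketch below scores one participant's ten SUS ratings (odd-numbered statements are positively worded, even-numbered ones negatively worded) and computes NPS from a set of 0-10 recommendation ratings; all response values are hypothetical.

```python
def sus_score(responses):
    """Return the 0-100 SUS score for a list of ten 1-5 ratings."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items contribute (rating - 1); even items contribute (5 - rating).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))     # -> 85.0 (above the 68 average)
print(nps([10, 9, 8, 7, 6, 3, 10, 9, 5, 8]))         # -> 10.0
```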

Reporting and Presenting Findings

  • Create a clear and concise usability report summarizing the study's objectives, methods, findings, and recommendations
    • Use a structured format with sections for executive summary, methodology, results, and discussion
    • Highlight key insights and prioritize usability issues based on their impact and severity
  • Visualize data using charts, graphs, and heatmaps to effectively communicate patterns and trends (a plotting sketch follows this list)
    • Use task completion rates, time on task, and error rates to compare performance across tasks or user groups
    • Include user quotes and video clips to illustrate key findings and provide context
  • Present findings to stakeholders through engaging presentations or workshops
    • Tailor the content and level of detail to the audience's needs and expertise
    • Focus on the most critical usability issues and their impact on user experience and business goals
  • Facilitate discussions and collaborate with the design and development teams to translate findings into actionable design improvements
  • Establish a plan for iterative testing and evaluation to track the effectiveness of implemented changes and ensure continuous usability improvement
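
As one way to visualize results for a report or presentation, the sketch below draws a bar chart of per-task success rates with matplotlib; the task names, rates, and benchmark line are made up for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical per-task success rates from a usability study
tasks = ["Sign up", "Search", "Checkout", "Edit profile"]
success_rates = [0.95, 0.80, 0.55, 0.70]  # share of participants who succeeded

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(tasks, [r * 100 for r in success_rates], color="steelblue")
ax.axhline(78, linestyle="--", color="gray", label="benchmark (hypothetical)")
ax.set_ylabel("Task success rate (%)")
ax.set_ylim(0, 100)
ax.legend()
fig.tight_layout()
fig.savefig("task_success.png")  # embed the image in the report or slide deck
```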

Applying Usability Insights to Design

  • Prioritize usability issues based on their severity, impact on user experience, and alignment with project goals and constraints
  • Collaborate with the design team to generate potential solutions and improvements for identified usability problems
    • Conduct ideation sessions and sketching workshops to explore a range of design alternatives
    • Use design principles (visibility, feedback, consistency) and best practices to guide the ideation process
  • Create iterative prototypes (low-fidelity to high-fidelity) to test and refine proposed design solutions
    • Use prototyping tools (Sketch, Figma, InVision) to quickly create and modify interactive mockups
    • Conduct usability testing on prototypes to validate design decisions and gather additional user feedback
  • Implement design changes incrementally, considering the development effort and potential impact on the overall user experience
  • Establish usability guidelines and design patterns to ensure consistency and adherence to best practices across the product or system
  • Continuously monitor and assess the effectiveness of implemented design improvements through ongoing usability evaluation and user feedback
    • Use analytics and user behavior data to track key usability metrics and identify areas for further optimization
    • Foster a culture of user-centered design and iterative improvement within the organization

