CS6750 HCI Notes

Author: Taichi Nakatani

1.1 Introduction to HCI

Interaction:

  • Human
  • Computer
  • Task

We might be experts at interacting with computers, but that doesn't make us experts at designing interactions between other humans and computers.

Overview of HCI

  • Human factors engineering: Designing interactions between people and products, systems, or devices.
    • Merger of engineering and psychology
  • HCI vs UI Design: UI design is more concerned with on-screen interaction.
  • HCI vs UX Design: UX design dictates how humans interact with computers (while HCI seeks to understand this interaction). Symbiotic relationship.

What is HCI:

  • Research: needfinding, prototyping, evaluation
  • Design: distributed cognition, mental models, universal design
  • Research and design are symbiotic.

Reference:

  • Richard Mander, Gitta Salomon, and Yin Yin Wong. 1992. A “pile” metaphor for supporting casual organization of information. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '92). Association for Computing Machinery, New York, NY, USA, 627–634. https://doi.org/10.1145/142750.143055
    • How do people organize the rapid flow of information in their workspaces?
    • People organized things in 'piles' of related materials. Mimic it in computer interface.

1.2 Introduction to CS6750

Learning goals:

  1. Understand common principles in HCI
  2. Understand the design life cycle
  3. Understand current applications of HCI

Learning outcome: "To design effective interactions between humans and computers"

  • Design: Applying known principles to a new problem.
  • Design: An iterative process of needfinding, prototyping, evaluating, and revising.
  • Effective interactions: usability, research, change
  • Between humans and computers: Not just interfaces, designing interactions.

Learning strategies:

  • learning by example
  • learning by doing
  • learning by reflection ("you are not your user")

1.3 Exploring HCI

New application areas:

  • Technologies: emerging technological capabilities that enable new and interesting user interactions.
  • Domains: pre-existing areas that could be significantly disrupted by computer interfaces
  • Ideas: Theories about the way people interact with interfaces and the world around them.

Technology

  • Augmented Reality: Complements real world.
  • Ubiquitous Computing: Computing power anywhere, anytime. IoT. Wearable technology.
  • Human-robot interaction: Robot safety, societal impact, feedback/reinforcement systems
  • Mobile: Limitations in compute, but ubiquitous. Challenges in replacing computers.
  • Context-sensitive computing: Equipping UIs with historical, geographical, or other forms of contextual knowledge.

Idea

  • Gesture-based interaction: wrist-band motion
  • Pen/touch-based interaction
  • Information visualization: representing abstract data visually to help humans understand it
  • Computer-supported Cooperative Work (CSCW): Using computers to support people working together.
    • Divided by TIME and PLACE
  • Social Computing: How computers affect the way we interact and socialize
    • Recreating social norms within computational systems (chat interfaces, people using emojis to show emotion, etc.)

Domain

  • Special needs: Prosthetics, communicating data to a blind person using sound.
  • Education: Making education challenging, but not due to a bad interface. "The worst thing to do is make students busy worrying about the interface instead of the subject matter itself"
  • Healthcare: virtual reality, immersion therapy.
  • Security: Increasing the usability of security through HCI.
  • Games: good logical mapping between action & effect. Tight connection between task and interface.

2.1 Intro to Principles

Learning Goals:

  1. Focusing on the task (not tools)
  2. Role of the interface in mediating users and tasks
  3. The role of the user: Processor? Predictor? Participant?
  4. User experience at multiple levels

Tips for identifying a task

  1. Watch real users
  2. Talk to them (what are their goals and motives?)
  3. Start small
  4. Abstract up - work from small observations, abstract up to an understanding of the tasks they're trying to complete.
  5. You are not your user

Usefulness and Usability

  • Useful: interface allows the user to achieve some task
  • Usability: More important; understanding the task lets you create solutions beyond standard interfaces (e.g. paper maps vs navigation systems)
    • Reduce cognitive load: The total mental effort being used in working memory.

Views of the User: Processor

  • Humans as "processors": Take input in and give output out.
  • Interface must fit within human limits:
    • Think of what humans can sense, store in memory, and physically do in the world.
    • "Usability" equals an interface that is physically usable.
  • Interfaces are evaluated by quantitative experiments:
    • Numeric measurements on how quickly the user can complete a task, or react to a stimulus.
  • Less emphasis placed on this view

Views of the User: Predictor

  • Humans as "predictors": We want humans to predict what will happen in the world as a result of some action they take.
  • Interface must fit with their knowledge:
    • Help users learn what they don't know, and leverage what they already know.
  • Evaluated by qualitative studies:
    • "Ex situ" studies (in a controlled environment)
    • Task analyses, cognitive walk-throughs to understand the user's thought
  • Still focused on one user, one task.

Views of the User: Participant

  • Humans as "participants": Interface is interested in what's going on around the user (e.g. other tasks, other people they are interacting with).
  • Interface must fit with the context: Humans must be able to interact with the system in the context where they need it.
  • Evaluated by "in-situ" studies (studying interface + user within the real context of the task)

PPP Table

Views of User: Schools of Thought

  • Processor: From Behaviorist school. Systematic way to investigate behaviors in humans and other animals.
    • John B. Watson: Focus on observed behavior, not introspection.
    • Pavlov (dog), Skinner (operant conditioning, rats)
  • Predictor: From Cognitivism. We care about what the user is predicting, ie. thinking.
    • Is knowledge inborn, or through experience (Kant, Descartes)
    • Cognitive science (50s)
    • Chomsky, Carey, Minsky, Herbert Simon, etc.
  • Participant: Functionalism and Systems schools of psychology. Rooted more in HCI itself.
    • Cares about environment of the user.
    • Edwin Hutchins, Lucy Suchman, etc.

Designing with Three Views

Test Case: Tesla interface screen

Processor model: Strictly observe user's behavior (e.g. timing)

  • Pros: May use existing data; enables objective comparisons (e.g. text vs voice with respect to speed).
  • Cons: Doesn't reveal the reasons for differences. Can't differentiate by expertise (power user vs novice). Helps optimize but not redesign.

Predictor model:

  • Pros:
    • More complete picture of interaction: Ask users for input (e.g. interviews / focus groups, show prototypes and see what they think). Learn why users use different interfaces at different times (voice vs text).
    • Targets different levels of expertise: Ranging from power users to novice.
  • Cons:
    • Analysis may be expensive (going through transcripts, analysis takes time).
    • Analysis is subject to biases: Analyst can have bias in interpreting interview data.
    • Ignores broader interaction context: Only focuses on interface, not the real authentic environment in which they are using that interface.

Participant model:

  • Pros:
    • Evaluates interaction in context: Notice how users may get distracted.
    • Captures authentic user attention
  • Cons:
    • Expensive to perform and analyze
    • Requires real, functional interfaces (not prototypes). Hard to use this model when getting started with a new design task.
    • Subject to more uncontrollable variables.

Takeaway: We'll use all of these models at different times and in different contexts

We might start with a participant model where we just ride around with users watching what they do.

Based on that, we might observe that they spend a lot of time fumbling around to return to the same few locations.

So, then we might redesign an interface to include some kind of ‘bookmarking’ system, and present it to users in interviews.

There, they might tell us that they like the design, but further note that they don’t need a long list of bookmarks -- they really only need work and home.

Based on that, we might then design an interface where a simple swipe takes them to work or home. Then, we might test that with users to see how much more efficiently they’re able to start navigation when these kinds of shortcuts are provided.

The results of each design phase inform the next, and different phases call for different types of evaluation, which echo different models of the user.

Good Design, Bad Design

  • Good Design: A GPS system that warns you 20 seconds before you need to make a turn
  • Bad Design: A GPS system that warns you 2 seconds before you need to make a turn
  1. If you view the user just as a sensory processor, you might think that we need only alert the user a second before they need to turn: after all, human reaction time is less than a second.
  2. If you view the user as a predictor, you understand that they need time to slow the car down and make the turn, so they need a few more seconds to actually execute the action of turning after being alerted about the upcoming turn.
  3. And if you view the user as a participant, you understand that this is happening while they’re going 50 miles an hour down the road with a screaming toddler in the back seat trying to merge with a driver on a cell phone and another eating a cheeseburger.

Reflections: Views of the User

  • Bad processor model: Time tracking by manual entry, doesn't take realistic view of human's role in the system.
  • Good predictor model: Ed UI (interface to show upcoming lessons, playback bar), takes cognitive load off user so they can focus on learning.
  • Good participant model: Sleep tracking apps. It monitors my sleep cycles, rings at the optimal time, and tracks my sleep patterns to make recommendations.

User Experience - Sans Design

  • By my definition, user experience design is attempting to create systems that dictate how the user will experience them.
  • User experience on its own, however, is a phenomenon that emerges out of interactions between users and tasks via interfaces.
  • It goes beyond the simple interaction of the user with the interface to accomplish the task and touches on the emotional, personal, or more experiential elements of the relationship.
  • We can build this idea as an expanding understanding of the scope of what defines the ‘user experience’.

Design Challenge: Morgan on the Street

So, keeping in mind everything we’ve talked about, let’s design something for Morgan. Morgan walks to work. She likes to listen to audiobooks, mostly non-fiction. But she doesn’t just want to listen, she wants to be able to take notes and leave bookmarks as well. What would designing for her look like from the perspectives of viewing her as a processor, a predictor, and a participant?

  1. Processor: What is communicated, when and how.
    • Look at what information is communicated to Morgan, when and how.
  2. Predictor: How the interface meshes with Morgan's immediate needs.
    • Look at how the interface meshes with Morgan's needs with regard to this task: How easy it is to access, and how easy the commands are to perform.
  3. Participant: How the interface interacts with Morgan's life as a whole
    • Look at broader interactions between this interface and Morgan's other tasks and social activities. We might look at how increased access to books changes her life in other ways.

Conclusion

  1. Interfaces mediate between users and tasks.
  2. Usability: Efficiency and user satisfaction.
  3. 3 views of the user (processor, predictor, participant)
  4. UX at group / societal levels.

2.2 Feedback Cycles

Feedback Cycles are Fundamental

Gulf of Execution

Gulf of execution: How hard is it to do in the interface what is necessary to accomplish those goals? What’s the difference between what the user thinks they should have to do, and what they actually have to do?

3 Components:

  1. Identify intentions - User must identify what their goal is in the context of the system.
  2. Identify Actions - User must be able to identify the actions necessary to accomplish their goals.
  3. Execute in Interface - User must be able to actually interface with the system to carry out the actions.

Example: Microwave

  1. identify intent - "microwave for one minute"
  2. identify action - "press heat, 1, 0, 0, start"
  3. execute - actually act on the sequence

5 Tips to Reduce Gulf of Execution

  1. Make functions discoverable - Ideally, the functions of the interface would be discoverable, meaning that they can find them, clearly labeled, within the interface.
  2. Let the user mess around - You want your user to poke around and discover things, so make them feel safe doing so. Don’t include any actions that can’t be undone. Avoid any buttons that can irreversibly ruin their document or setup. That way, the user will feel safe discovering things in your interface.
  3. Be consistent with other tools - We all want to try new things and innovate, but we can bridge gulfs of execution nicely by adopting the same standards many other tools use. Use Ctrl+C for Copy, Ctrl+V for paste. Using a diskette icon for ‘save’, even though no one has used floppy disks in years. This makes it easy for users to figure out what to do in your interface.
  4. Know your user - The gulf of execution has a number of components: identifying your intentions, identifying the actions to take, and taking the actions. For novice users, identifying their intentions and actions are most valuable, so making commands discoverable through things like menus is preferable. For experts, though, actually doing the action is more valuable. That’s why many experts prefer the command-line: although it lacks many usability principles targeted at novices, it’s very efficient.
  5. Feedforward - We’ve talked about feedback, which is a response to something the user did. Feedforward is more like feedback on what the user might want to do. It helps the user predict what the result of an action will be. For example, when you pull down on the Facebook newsfeed on your phone, it starts to show the refresh icon -- if you don’t finish pulling down, it doesn’t refresh. That’s feedforward: information on what will happen if you keep doing what you’re doing.
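Tip 2 above ("let the user mess around") hinges on making actions reversible. A minimal sketch of one common way to do this, with hypothetical names, is a command stack that records how to undo each action:

```python
# Minimal undo-stack sketch (hypothetical names, not from the lecture):
# every action is paired with a closure that reverses it, so the user
# can safely explore without fear of irreversibly ruining their work.
class UndoStack:
    def __init__(self):
        self._undo = []

    def do(self, action, undo_action):
        action()                      # perform the action now
        self._undo.append(undo_action)  # remember how to reverse it

    def undo(self):
        if self._undo:
            self._undo.pop()()        # reverse the most recent action


doc = []
stack = UndoStack()
stack.do(lambda: doc.append("hello"), lambda: doc.pop())
stack.do(lambda: doc.append("world"), lambda: doc.pop())
stack.undo()  # removes "world"; doc is back to ["hello"]
```

Real editors typically pair this with a redo stack, but the core idea is the same: no action ships without its inverse.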

Gulf of Evaluation

Gulf of evaluation: How does the user become aware that their action succeeded?

3 Components:

  1. Interface output - What is actually displayed/communicated to the user.
  2. Interpretation - User needs to interpret the output to find out what it means for the system.
  3. Evaluation - User can evaluate whether the desired change occurred.

Example: Thermostat

  • interface output: heat is turned on, but lacks communication to user that heat is on. How to fix: display "heat" in UI.
  • heater might shut off for other reasons without indication - there is a large gulf of evaluation
  • User has to do a lot to evaluate whether their actions did something.

5 Tips to Reduce Gulf of Evaluation

  1. Give feedback constantly - Don’t automatically wait for whatever the user did to be processed in the system before giving feedback. Give them feedback that input was received. Give them feedback on what input was received. Help the user understand where the system is in executing their action by giving feedback at every step of the process.
  2. Give feedback immediately - Let the user know they have been heard even when you’re not ready to give them a full response. If they tap an icon to open an app, there should be immediate feedback on that tap. That way, even if the app takes a while to open, the user knows that the phone recognized their input. That’s why icons briefly grey out when you tap them on your phone.
  3. Match the feedback to the action - Subtle actions should have subtle feedback, significant actions should have significant feedback.
  4. Vary your feedback - It’s often tempting to view our designs as existing solely on the screen, and so we want to give feedback on the screen. But the screen is where the interaction is taking place, so visual feedback can get in the way. Think about how auditory or haptic feedback can be used instead of relying just on visual feedback.
  5. Leverage direct manipulation - whenever possible, let the user feel like they’re directly manipulating things in the system. Things like dragging stuff around or pulling something larger or smaller are very intuitive actions because they feel like you’re interacting directly with the content.
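Tips 1 and 2 (constant and immediate feedback) can be sketched as a pattern: acknowledge the input right away, then report progress at every step rather than staying silent until the work finishes. This is an illustrative sketch with made-up names, not an API from the lecture:

```python
# Sketch: a long-running operation that gives immediate acknowledgment,
# then per-step progress, via a caller-supplied notify callback.
def process(items, notify):
    notify("received %d items" % len(items))  # immediate: input was heard
    results = []
    for i, item in enumerate(items, 1):
        results.append(item.upper())          # stand-in for real work
        notify("processed %d/%d" % (i, len(items)))  # constant feedback
    notify("done")
    return results


messages = []
process(["a", "b"], messages.append)
# messages now traces the whole cycle:
# received 2 items -> processed 1/2 -> processed 2/2 -> done
```

In a GUI the `notify` callback would update a spinner or progress bar, but the structure is identical: the user never has to wonder whether the system registered their action.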

Norman's Feedback Cycle Stages

7 questions to bridge the gulf of execution / evaluation:

  1. How easily can one determine the function of the device? This relates to the user’s goal: how easily can the user determine that the interface is capable of accomplishing their goal?
  2. How easily can one tell what actions are possible? This is important for the user to be able to construct their plan.
  3. How easily can one determine the mapping from intention to physical movement?
  4. How easily can one actually perform that physical movement?
  5. How easily can one tell what state the system is in?
  6. How easily can one tell if the system is in the desired state?
  7. How easily can one determine the mapping from system state to interpretation?

Norman also further articulates this by breaking the process into phases that span both execution and evaluation.

  1. The raw action and perception is referred to as visceral: this is the physical act of performing the plan, or perceiving the outcome.
  2. The behavioral area is where we think about what steps to actually take or what we’re seeing from the interface.
  3. The reflective area is where we put it in the context of our goal: either translating a goal into a plan, or comparing the interpreted results to the original goal.

Tying it to KBAI:

Feedback Cycles - David's Car

  • Designing a system the way a user expects it to be designed helps them across the gulf of execution (location of ignition)
  • Detecting engine on or electrical only - Throws alert telling you how to start the car. The output presented is easy to interpret, and the context in which it is given helps us evaluate pretty quickly.
  • So we have some trouble here with the gulf of execution, but the gulf of evaluation is still pretty short.
  • Areas of improvement:
    1. We know that the screen can show an alert that the brake needs to be depressed to turn the car on. Why not show that immediately after the car door opens when the car is off?
    2. Ignition sound - Differ the sound based on whether the car turned on, rather than using same sound.

Design Challenge - Credit Card Readers

What is the problem with the framing of the problem?

The right answer is: We shouldn't be thinking just about swiping or inserting a card, we should be thinking about the general purchasing process.

Conclusion

  1. We discussed feedback cycles’ incredible ubiquity in other fields and discussions.
  2. We talked about gulfs of execution, the distance between knowing what they want to accomplish and actually executing the steps to accomplish it.
  3. We talked about gulfs of evaluation, the distance between making some change in a system and evaluating whether or not the goal was accomplished.
  4. We introduced the seven questions we need to ask to bridge those gulfs.

3.1 Intro to Methods

Lesson Goals

  • Students will understand the notion of user-centered design, especially as it contrasts with other design philosophies.
  • Students will understand the fundamental principles and approaches to user-centered design.
  • Students will understand the design life cycle.
  • Students will understand the goal of the unit.

Lesson Outcomes

  • Students will be able to describe the phases of the design life cycle.
  • Students will be able to describe the tenets of user-centered design.
  • Students will be able to identify qualitative vs. quantitative data and describe the value of each.

Assessments

  • Students will reflect on the application of the lesson’s concepts to their chosen area of HCI.
  • Students will engage in a short design task based on the lesson’s concepts.
  • Students will complete a short answer assignment in which they critique a provided interface from the perspective of the lesson’s concepts.
  • Students will complete a short answer assignment in which they select an interface to critique from the perspective of the lesson’s concepts.
  • Students will complete a short answer assignment in which they design a revision of one of the critiqued interfaces from the perspective of the lesson’s concepts.

User-centered Design

"In order to design interactions that are better than existing designs, it is important to take into consideration the user’s needs at every stage of the design process."

  • Definition: Design that considers the needs of the user throughout the entire design process.
  • What it does:
    1. Examine the user’s needs in depth, both by observing them and by asking them direct questions.
    2. After we start designing, we need to present our design alternatives and prototypes to the user to get feedback.
    3. When we near a design, we need to evaluate the quality of the design with real users.
  • Pitfalls of bad design:
    1. Design to meet functional requirements instead of real needs
    2. False assumption that designers knows the needs of user

Principles of User-Centered Design

ISO - Six principles to follow when pursuing user-centered design

  1. The design is based upon an explicit understanding of users, tasks and environments.
    • This means that we must gather information about the users, the tasks they perform, and where they perform those tasks, and leverage that knowledge throughout the design process.
  2. Users are involved throughout design and development.
    • Involvement can take on many forms, from regularly participating in interviews and surveys about designs and prototypes to actually working on the design team alongside the designers.
  3. The design is driven and refined by user-centered evaluation.
    • We absolutely must have real users evaluating the prototypes and interfaces we assemble.
  4. The process is iterative.
    • No tool is developed once, released, and then abandoned. Designs undergo constant iteration and improvement, even after being released.
  5. The design addresses the whole user experience.
    • Many designers are tempted to delineate a certain portion of the experience as their primary interest, but we must address the entire user experience.
  6. The design team includes multidisciplinary skills and perspectives.
    • Good teams for pursuing user-centered design include people with a number of different backgrounds, including psychologists, designers, computer scientists, domain experts, and more.

Stakeholders

"User-centered design isn’t just about catering to the user in the middle, but also in looking at the impact of our design on all the affected stakeholders."

  1. User - the person who uses the interface that we create
  2. Secondary - secondary stakeholders don't directly use the interface but might interact with its output.
  3. Tertiary - people who never interact with the tool OR its output, but are nonetheless impacted by it.

Examples:

  • Gradebook tool: User = Teacher (uses gradebook), Secondary = Parents (receives gradebook), Tertiary = Students (affected by grade)
  • Thought: How does parents having more consistent access to student grade information affect students? Might foster more involvement, but also could lead to helicopter parenting.

Reference: “The Inmates Are Running the Asylum” by Alan Cooper

  • Compares technology to a dancing bear at a circus. He notes that people marvel at a dancing bear not because it’s good at dancing, but because it dances at all.
  • Engineers shouldn't be UI designers

The Design Life Cycle

  1. Needfinding - Gather a comprehensive understanding of the task that users are trying to perform. Includes "who is the user", "what is the context of the task", "why are they doing the task".
  2. Design Alternatives - Very early ideas on different ways to approach the task. Emphasis on multiple designs to avoid fixating on one idea.
  3. Prototyping - Take ideas with most potential, build them into prototypes that can be put in front of the user.
  4. User Evaluation - Take ideas and put them in front of users. Get feedback... and go back to step 1

Design Life Cycles meet Feedback Cycles

"In HCI, we’re designing interfaces to accomplish goals, and then based on the output of our evaluations with those interfaces, we judge whether or not the goals of the interface were accomplished. Then, we repeat and continue."

"In many ways, we’re doing the same things that our users are doing: trying to understand how to accomplish a task in an interface. "

Qualitative vs Quantitative Data

Quantitative Data: observations described or summarized numerically. Quantitative data involves anything numeric.

  • Quantitative data supports formal tests, comparisons, and conclusions.
  • Only captures a narrow view of what we might be interested in examining.
  • Strong for measuring a small class of data points.

Qualitative Data: observations described or summarized non-numerically.

  • Includes natural language (surveys, natural response, reports)
  • Much broader and more general picture of what we’re examining.
  • More prone to biases.
  • Harder to generate formal conclusions based on qualitative data.
  • In some circumstances, we can convert qualitative data into quantitative data. (Convert free response to quant data by coding, turn multiple choice into nominal data)
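The "coding" step mentioned above can be sketched concretely: assign each free-text response a nominal category, then count the categories. The keyword map and helper below are hypothetical, just to show the shape of the conversion:

```python
from collections import Counter

# Hypothetical coding scheme: map keywords in free-text survey responses
# to nominal categories, turning qualitative data into countable
# (quantitative) data.
CODES = {
    "confusing": "usability",
    "slow": "performance",
    "crashed": "reliability",
}


def code_response(text):
    for keyword, code in CODES.items():
        if keyword in text.lower():
            return code
    return "other"


responses = [
    "The menu layout was confusing",
    "It felt slow on my phone",
    "App crashed twice",
    "Loved the colors",
]
counts = Counter(code_response(r) for r in responses)
# counts now supports the formal comparisons quantitative data allows
```

In practice coding is done by human raters with an agreed codebook (and inter-rater reliability checks), but the end product is the same: nominal data you can tabulate and test.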

Uses:

  • Needfinding - focus on qualitative descriptions of tasks or experiences
  • Prototyping - start focusing on quantitative, numeric improvements.
  • Using both is called mixed method.


3.2 Ethics and Human Research

Origin of Institutional Review Board (IRB)

Infamous studies:

  1. Tuskegee Syphilis Study, where treatment was withheld from rural African-American men who had syphilis, without their informed consent, so researchers could study the disease's progression.
  2. Milgram obedience experiment, where participants were tricked into thinking they had administered lethal shocks to other participants to see how obedient they would be.
  3. Stanford Prison Experiment, where participants were psychologically abused to test their limits.

Response:

  1. National Research Act of 1974 - led to the creation of Institutional Review Boards to oversee research at universities
  2. Belmont Report - summarizes basic ethical principles that research must follow.
    • benefits to society outweigh the risks to the subjects.
    • subjects be selected fairly (direct response to the Tuskegee syphilis study)
    • demanded rigorous informed consent procedures
    • positive results of research outweigh the negatives and that participant rights are always preserved

The Value of Research Ethics

  • IRB’s main task is to make sure the potential benefits of a study outweigh the potential risks to participants.
  • Ensure data we gather is useful.
  • Issues of coercion: When participants feel coerced to participate in research, the data they actually supply may be skewed by that negative perception, which impacts our data.
  • Issues of bias: We might demand too much from participants, or ask questions that are known to affect our results.

IRB Protocols

See document link below for explanation of:

  • Basics
  • Human Subject Interaction
  • Consent Procedures

https://docs.google.com/document/d/1e3BbJMHNABxvss1i1bXroJY_CzzqQrEe7vud5s_yfVE/edit#heading=h.weyas98hw358

Research Ethics and Industry

  • Industry constantly runs experiments on users.
  • Facebook experiment, "Experimental evidence of massive-scale emotional contagion through social networks". Would likely never have passed an IRB, because it manipulated users' moods for experimentation and did not obtain informed consent.
  • Was it ethical?
    • Yes - Facebook did have an internal IRB. Term of service covered it.
    • No - Users couldn't opt out. Users were not aware.

Paper spotlight - Evolving the IRB: Building Robust Review for Industry Research

  • Industry only governs itself, paper proposes new standards for industry
  • Proposes that Facebook refer studies to an external reviewer and/or an external IRB.

3.3 Needfinding and Requirements Gathering

Data Inventory

Before we start our needfinding exercises, we also want to enter with some understanding of what data we want to gather.

  1. Who are the users? What are their ages, genders, levels of expertise?
  2. Where are the users? What is the environment?
  3. What is the context of the task? What else is competing for users’ attention?
  4. What are their goals? What are they trying to accomplish?
  5. Right now, what do they need? What are the physical objects? What information do they need? What collaborators do they need?
  6. What are their tasks? What are they doing physically, cognitively, socially?
  7. What are the subtasks? How do they accomplish those tasks?

Problem Space

In order to do some real needfinding, the first thing we need to do is identify the problem space.

  1. Where is the task occurring?
  2. What else is going on?
  3. What are the user’s explicit and implicit needs?
  • As we’re going about needfinding, we want to make sure we’re taking the broad approach: understanding the entire problem space in which we’re interested, not just focusing narrowly on the user’s interactions with an interface.
  • So, in our exploration of methods for needfinding, we’re going to start with the most authentic types of general observation, then move through progressively more targeted types of needfinding.

User Types

Significance: We want to understand who we’re designing for.

  • Account for all types of user who we're designing the product for.

Audiobook example:

  1. For kids, and/or adults
  2. Experts at exercising, and/or novices
  3. Experts at listening to audiobooks, and/or novices?

"Differentiate whether I’m designing for business people who want to be able to exercise while reading, or exercisers who want something else to do while exercising."

  • identify these different types of users, and perform needfinding exercises on all of them.
  • Reference: Doing Cultural Studies by Hugh Mackay and Linda Janes.

Avoiding Bias in Needfinding

  1. Confirmation bias. Confirmation bias is the phenomenon where we see what we want to see. We enter with some preconceived ideas of what we’ll see, and we only notice those things that confirm our prior beliefs.
    • Try to avoid this by specifically looking for signs that you’re wrong, by testing your beliefs empirically, and by involving multiple individuals in needfinding.
  2. Observer bias. When we’re interacting directly with users, we may subconsciously bias them, e.g. being more helpful to participants using our interface as intended than to those using a competitor's design.
    • Try to avoid this by separating experimenters with motives from the participants, by heavily scripting interactions with users, and by having someone else review your interview scripts or surveys for leading questions.
  3. Social desirability bias. If you’re testing an interface and the participants know you were the designer, they’ll want to say nice things about it to make you happy.
    • Try to avoid this by conducting more natural observations and recording objective data. Stay out of the participants' way.
  4. Voluntary response bias. People with stronger opinions are more likely to respond to optional surveys. Risk oversampling extreme views.
    • Try to avoid this by limiting how much of the survey content is shown to users before they begin the survey, and by confirming any conclusions with other methods.
  5. Recall bias. Participants forget what they did and how they felt, which leads to misleading data.
    • Try to avoid this by studying tasks in context: have users think aloud during activities, or conduct interviews during the activity itself.

Naturalistic Observation

Definition: Fly on the wall approach. Note down what people are doing, and let that guide the design.

  • How: Note specific observations, then generalize to abstract tasks. This avoids confirmation bias. Think about what they're doing, and how it would affect how they'd want to interact with the design.
  • Cons: Ethically constrained (can't collect personally identifiable information). Can't know what people are thinking.

5 Tips for Naturalistic Observation

  1. Take notes. Don’t just sit around watching for a while; be prepared to gather targeted information and observations about what you see.
  2. Start specific, then abstract. Write down the individual little actions you see people doing before trying to interpret or summarize them. If you jump to summarizing too soon, you risk tunnel vision.
  3. Spread out your sessions. Rather than sitting somewhere for two hours one day and moving on, try to observe in shorter 10-15 minute sessions several times. You may find different, interesting information each time, and your growing understanding and reflection on past exercises will help your future sessions.
  4. Find a partner. Observe together with someone else. Take your own notes, then compare them later so you can see if y’all interpreted the same scenarios or actions the same way.
  5. Look for questions. Naturalistic observation should inform the questions you decide to ask participants in more targeted needfinding exercises. You don’t need to have all the answers based on observation alone: what you need is questions to investigate further.

Participant Observation

Definition: Be a participant in your own study.

  • Warning: Be sure not to over-index on your personal observations (you're not your user).
  • Use these experiences to inform what you ask users going forward.

Hacks and Workarounds

Significance: Look at hacks users employ.

  • How do users use the UI in unintended ways?
  • How do they break out of an interface to accomplish a task that could have been accomplished within the interface (e.g. post-it notes in a desk setting - they're still useful)?
  • Don't assume you understand why, ask them.
  • Uncover errors: Users use hacks to get around these. Good sign to fix it.

Errors:

  • Can use them to understand more about user's mental model
  • Errors vs. mistakes: An error implies there's nothing wrong with the user's mental model of how the UI works; the problem is they can easily forget the current state.
    • Mistakes: The mental model itself is weak, making the user more prone to mistakes (e.g. navigating on a Mac when accustomed to a PC).

Apprenticeship and Ethnography

Significance: Use ethnography (living among the users you're studying) to understand the domain knowledge necessary to design a new interface or improve the user's task.

  • Why: Sometimes, no amount of observation suffices to get full understanding of the way the task works.

Interviews and Focus Groups

  • Interviews: 1:1, Focus Groups: Group conversation.
  • Focus groups run the risk of "overly convergent thinking" (all agreeing with each other).

5 tips for better interviews

  1. Focus on the six W’s in writing questions: Who, what, where, when, why, and how? Try to avoid questions that lend themselves to one-word or yes-or-no answers: those are better gathered via surveys. Use your interview questions to ask open-ended, semi-structured questions.
  2. Be aware of bias: Look at how you’re phrasing your questions and interactions and make sure you’re not predisposing the participant to certain views. If you only smile when they say what you want them to say, for example, you risk biasing them to agree with you.
  3. Listen: Many novice interviewers get caught up in having a conversation with the participant rather than just gathering data from the participant. Make sure the participant is doing the vast majority of the talking, and don’t reveal anything that might predispose them to agree with you.
  4. Organize the interview: Make sure to have an introduction phase, some lighter questions to start to build trust, and a summary at the end so the user understands the purpose of the questions. Be ready to push the interview forward or pull it back on track.
  5. Practice!: Practice your questions on your friends, family, or research partners in advance. Rehearse the entire interview. Gathering subjects is tough, so when you actually have them, make sure you’re ready to get the most out of them.

Examples of good / bad interview Qs

  • Bad: "Do you exercise", Good "How often do you exercise". Latter is more open-ended.
  • Bad: "Do you e xercise for A or B", Good "Why do you exercise" - Don't present dichotomy / yes or no questions.
  • Good: "What, if anything, do you listen to while exercising"
  • Bad: "What smartphone do you use to listen to something while exercising" Good: "What device do you listen to while..." - Former assumes you use a smartphone.
  • Bad: "We're developing in app for audiobooks while exercising... are you interested" - Introduces "social desirability bias", no one wants to say they don't want to join. Good: "Would you be interested in an app for audiobooks while exercising"

Think-Aloud

Definition: Ask users to talk about their perceptions of the task in the context of the task (while they're doing it).

  • Pros: Can get at user's thoughts that they forget if asked afterwards.
  • Cons: Change user's viewpoint while doing task - they might approach it more deliberately than they would in real life.
    • Workaround: "Post-event protocol" - wait to get the user's thoughts until immediately after the activity.

Surveys

5 Tips of Good Surveys:

  1. Less is more. The biggest mistake novice survey designers make is to ask way too much. That affects the response rate and reliability of your data. Ask the minimum number of questions necessary to get the data that you need, and only ask questions you know you’ll use.
  2. Be aware of bias. Look at how you’re phrasing the questions: are there positive or negative connotations? Are participants implicitly pressured to answer one way or the other?
  3. Tie them to the inventory. Make sure every question on your survey connects to some of the data that you want to gather. Start with the goals for the survey and write the questions from there.
  4. Test it out! Before sending it to real participants, have your coworkers or colleagues test out your survey. Pretend they’re real users, and see if you would get the data you need from their responses.
  5. Iterate! Survey design is like interface design. Test out your survey, see what works and what doesn’t, and revise it accordingly. Give participants a chance to give feedback on the survey itself so that you can improve it for future iterations.

Writing Good Survey Questions

  1. Be Clear
    • Make sure the user actually understands what we’re asking about. We want them to have a clear foundation for answering the question.
    • If we’re using a numeric scale, we want to provide labels that explain what the scale means (e.g. "1 - Highly Dissatisfied").
    • If we’re providing ranges, we want to avoid overlapping ranges so the user isn’t confused about what to select.
    • Timebox the question (e.g. "In the past seven days, how many times have you exercised?").
  2. Be Concise - Use plain language
  3. Be Specific
    • Break questions down into smaller, specific questions that get at a big idea.
    • Avoid "doubled barrel" questions, ie asking about two things at once. (e.g. "How satisfied are you with the speed AND availability of your mobile connection")
    • Avoid questions where the user could have conflicting ideas at the same time: if they could have conflicting ideas about different parts, break that into multiple short questions. ("How satisifed were you with your food" vs "How satisfied were you with the temperature of your food")
  4. Be Expressive - Allow the user to be expressive.
    • Emphasize in the question prompt that they’ll be providing an opinion -- this makes the user more comfortable giving their thoughts. ("Is your subscription price too high?" vs. "Do you feel your subscription price is too high, too low, or just right?")
    • When providing opinion ranges like ‘agree’ vs. ‘disagree’, always provide an odd number of options so that the user can respond neutrally, and at least 5 options so users feel more comfortable differentiating their level of agreement.
    • When asking a multiple-choice question, if there’s a chance a user could have more than one thought, let them choose more than one (checkbox)
    • Avoid binary questions
  5. Be Unbiased - How?
    • Giving an ‘other’ option limits bias towards your pre-selected options.
    • Watch for leading questions.
    • Watch for loaded questions.
    • Be careful with closed-ended questions: provide an ‘Other’ option.
    • Avoid leading questions ("Did our brand-new AI-based interface generate better recommendations?")
    • Avoid loaded questions ("How much time have you wasted on social media").
  6. Be Usable - Use HCI principles in designing the survey itself.
    • Provide a progress bar so that the user can evaluate how far they are into the survey.
    • Make the pages approximately consistent lengths so that the user has an accurate gauge for what it means to be on page 3 of 5 or something similar.
    • Order your questions logically: group questions about demographics, questions about prior experience, questions about future desires, etc. such that they follow a natural flow.
    • Alert users when questions are unanswered, but don’t require them to be answered: some users will feel uncomfortable answering some questions, so it’s good to leave them unrequired, but you also want to avoid users unknowingly skipping. So, tell them if they’ve skipped, but don’t force them to answer.
    • And finally, preview the survey yourself. Your users might not tell you if there are errors, so make sure to take it yourself.
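
Two of the guidelines above (non-overlapping ranges, odd-numbered scales with at least 5 options) are mechanical enough to check in code. Below is a minimal sketch; the function names are illustrative, not from the notes:

```python
# Sketch: mechanical checks for two of the survey guidelines above.
# All names here are illustrative, not from the notes.

def ranges_overlap(ranges):
    """True if any of the (low, high) inclusive answer ranges overlap."""
    ordered = sorted(ranges)
    return any(prev_high >= low
               for (_, prev_high), (low, _) in zip(ordered, ordered[1:]))

def valid_likert(options):
    """Odd number of options (so there's a neutral midpoint) and at least 5."""
    return len(options) >= 5 and len(options) % 2 == 1

# "25" appears in two age buckets, so the first set fails the check:
print(ranges_overlap([(18, 25), (25, 34), (35, 44)]))  # True (bad ranges)
print(ranges_overlap([(18, 24), (25, 34), (35, 44)]))  # False (fine)
print(valid_likert(["Strongly disagree", "Disagree", "Neutral",
                    "Agree", "Strongly agree"]))       # True
```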

Exercise: Bad Survey

Other Data Gathering Methods

  1. Existing UI evaluation - Critique interfaces that already exist using some of the evaluation methods we cover in the Evaluation lesson.
    • E.g. if you wanted to design a new system for ordering take-out food, you might evaluate the interfaces of calling in an order, ordering via mobile phone, or ordering via a web site
  2. Product Reviews - See what people already like/dislike about existing products.
  3. Data Logs - Get logs of user interaction that have already been generated
    • For example, say you wanted to build a browser that’s better at anticipating what the user will want to open next. You could grab data logs and look for trends both within and across users
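
The browser example above can be sketched in a few lines: count which page tends to follow which in a navigation log, then use the most frequent transition as the prediction. The log contents and function names are made up for illustration:

```python
# Sketch: mine a (made-up) navigation log for "what usually follows what".
from collections import Counter

log = ["mail", "calendar", "mail", "news", "mail",
       "calendar", "docs", "mail", "calendar"]

# Count each consecutive (current -> next) transition in the log.
transitions = Counter(zip(log, log[1:]))

def predict_next(current):
    """Most frequent page observed immediately after `current`, if any."""
    candidates = {nxt: n for (cur, nxt), n in transitions.items()
                  if cur == current}
    return max(candidates, key=candidates.get) if candidates else None

print(predict_next("mail"))  # "calendar" follows "mail" 3 times, "news" once
```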

Exercise: Needfinding Pros / Cons

Design Challenge: Needfinding for Book Reading

  • Finding participants: Maybe go to library, find participants.
  • Deciding the users: Do we want to narrow down to subset, or all book readers?

Iterative Needfinding

  • Observation informs interviews / surveys (why do exercisers only use one earbud?)
  • Evaluation also feeds back into needfinding (what worked / didn't work). What follow-up data do we need to improve the design?

Revisiting the Inventory

  • During needfinding, you'll gather a lot of data.
  • Pay attention to conflicting data - Are these cases where the designer understands elements of task design that users don't, or has your expertise not yet developed to the point of understanding the task?
  • Once you go through data gathering process, go back to the "Data Inventory" tenets to make sure you've answered those questions.

Representing the Need

  1. Step-by-step task outline
  2. Hierarchical network - develop task outline into hierarchy, more complex than DAGs.
  3. Structural diagram - Augment with diagram of structural relationships amongst elements in the system and illustrate how they interact.
  4. Flowchart - Introduce decision-making points, or points of interruptions.
  • Notice how these representations are very similar to the outcomes of the task analyses we discuss in the principles unit.
  • We can similarly use the data gathered from here to summarize a more comprehensive task analysis that will be useful in designing and prototyping our designs.

Defining the Requirements

Definition: Needs that our final interface must meet.

  • Specific and evaluable.
  • Can include components outside the user's tasks.

User data requirements:

  1. Functionality: what the interface can actually do.
  2. Usability: how certain user interactions must work.
  3. Learnability: how fast the user can start to use the interface.
  4. Accessibility: who can use the interface.

External Requirements:

  1. Compatibility: what devices the interface can run on.
  2. Compliance: how the interface protects user privacy.
  3. Cost: how much the final tool will cost.

2.3 Direct Manipulation

Definition: Direct manipulation is the principle that the user should feel as much as possible like they’re directly controlling the object of their task.

Invisible Interface: When the interface actually disappears. The user spends no time thinking about how to engage with the interface; all their time is dedicated to thinking about the task they're performing.

  • e.g. Desktop interface (terminal vs. Finder); the latter doesn't need much prior knowledge compared to the terminal.

Paper Spotlight: "Direct Manipulation Interfaces"

Hutchins, Edwin & Hollan, James & Norman, Donald. (1985). Direct Manipulation Interfaces. Human-computer Interaction. 1. 311-338. 10.1207/s15327051hci0104_2. https://www.lri.fr/~mbl/ENS/FONDIHM/2013/papers/Hutchins-HCI-85.pdf

Significance:

  • Hutchins authored the foundational paper for distributed cognition, and Norman created one of the most accepted sets of design principles.
  • Two aspects of directness, Distance and Direct Engagement

Distance: Semantic vs Articulatory Distance

Definition: distance between the user’s goals and the system itself. Encompasses gulf of execution/evaluation.

"...the feeling of directness is inversely proportional to the amount of cognitive effort it takes to manipulate and evaluate a system”."

  1. Semantic distance - Difference between the user’s goals and their expression in the system. ie, how hard it is to know what to do.
    • Captures "identify actions" and "identify intentions" aspect of gulf of execution.
  2. Articulatory distance - Distance between that expression and its execution. ie, how hard it is to actually do what you know to do.
    • Captures "execute" actions phase of gulf of execution.

"The user starts with some goals, translates them into their form of expression in the interface, and executes that expression. The system then returns some output in some form of expression, which is translated by the user into their understanding of the new state of the system."

Direct Engagement

Definition: Providing the user the feeling that they are directly controlling the objects.

"The systems that best exemplify direct manipulation all give the qualitative feeling that one is directly engaged with control of the objects--not with the programs, not with the computer, but with the semantic objects of our goals and intentions."

Examples:

  1. If we’re moving files, we should be physically moving the representation of the files.
  2. If we’re playing a game, we should be directly controlling our characters.
  3. If we’re navigating channels, we should be specifically selecting clear representations of the channels we want.

Exploring HCI: Direct Manipulation & VR

  • VR is a way towards direct engagement through gesture interfaces.
  • But feedback is often lacking; how do we best give the right feedback?

Exercise: Direct Manipulation

Apple touchpad actions, which are direct engagements:

  1. Pressing down to click - yes
  2. Pressing two fingers down to right-click - no
  3. Dragging two fingers up and down to scroll - yes
  4. Double-tap to zoom in and out - no
  5. Pinching to zoom in and out - yes

Making Indirect Manipulation Direct

Significance: "Direct manipulation isn’t just about designing interactions that feel like you’re directly manipulating the interface. It’s also about designing interfaces that lend themselves to interactions that feel direct."

  • Notification center (swipe left from the right side) - there's no meaning behind why it's on the right, but the action makes it feel direct.
  • Choice of animation can make an indirect manipulation feel more direct (e.g. the five-finger gesture for clearing the screen on a Mac touchpad). Similar with Launchpad.

Invisible Interfaces

Example: Stylus vs mouse - stylus makes the gulf much narrower to the point of the interface becoming invisible.

Good vs Bad Design of "invisible-ness"

  • Good: Interfaces that are metaphorically invisible
  • Bad: Interfaces that are literally invisible.
    • Gesture-based interfaces are literally invisible, so we need to provide really good feedback to give a sense of whether a gesture succeeded.

Invisibility by Learning

Significance: Interfaces become invisible not just through great design, but also through users learning to use them.

  • Just because the interface has become invisible doesn’t mean it’s a great interface: we cannot expect users to spend a lot of time learning to understand the interface.

Invisibility by Design

Goal: Users should feel immediately as if they’re interacting with the task underlying the interface.

  • North star, not often met.

5 Tips for Invisible Interfaces

  1. Use affordances - Affordances are places where the visual design of the interface suggests how it is to be used.
    • Buttons are for pressing, dials are for turning, switches are for flicking. Use these expectations to make your interface more usable.
  2. Know your user - Invisibility means different things to different people. Invisibility to a novice means all the interactions are natural, while invisibility to an expert means maximizing efficiency.
  3. Differentiate your user - If serving multiple user types, provide multiple ways of accomplishing tasks.
    • "copy"/"paste" under Edit menu for novices, but also Ctrl+C / V for experts.
  4. Let your interface teach - Teach via design rather than manuals.
  5. Talk to your user - Ask them what they’re thinking while they use an interface, and check whether they're talking about the task or the interface. If they're talking about the interface, the design is visible.

Design Challenge: The Universal Remote

Challenge: How would we design an invisible interface for universal remote control, one that doesn’t have the learning curves that most have?

Takeaway:

  • Voice interfaces - Challenge is how to exploit the underlying knowledge base of the user (ie. content, media type)

2.4 Human Abilities

Information processing model:

  1. Input (Perception) - How stimuli are sensed from the world and perceived in the mind.
  2. Processing (Cognition) - How the brain stores and reasons over the input it’s received.
  3. Output (Response) - How the brain then controls the individual’s actions in the world.

Perception

Visual

  1. The center of the eye is most useful for focusing closely on color or tracking movement.
  2. Peripheral vision is good for motion detection, but not for color or detail.
  3. Women are far less likely (about 1 in 200) to be color-blind than men (about 1 in 12). Thus, avoid relying on color alone to understand an interface.
  4. Sight is directional - easy to miss feedback
  5. Visual acuity decreases with age - be flexible to visual needs depending on age group.

Auditory

  1. Humans can discern noises based on pitch / loudness
  2. Good at localizing sound (near / far away)
  3. Can't filter out auditory information as easily as visual information, which can lead to a feeling of being overwhelmed.

Haptic

  1. Feel different types of input: pressure, vibration, temperature.
  2. Can't easily filter out touch feedback.
  3. Unlike sound, touch feedback is only available to the person being touched, so it can provide more personal, private feedback.
  4. Haptic feedback was traditionally natural (physical keyboards); it's harder to provide with touchscreens.

Design Challenge - Message Alerts

Q: How to alert someone when they receive a text message, without disturbing others.

Solutions: Smartphones have cameras and light sensors - use them to determine where the phone is, and from that, what type of alert to use. (This could lead to a lot of surprise, though.)

Memory

3 kinds of memory:

  1. Perceptual Store
  2. Short Term Memory
  3. Long Term Memory

Perceptual Store

Definition: Very short-term storage, lasting less than a second.

Baddeley & Hitch's model of working memory:

  1. Visuospatial sketchpad - holds visual information for active manipulation.
  2. Phonological loop (aka articulatory loop, phonological store) - Verbal / auditory information. Stores sounds and speech you've heard recently.
  3. Episodic buffer - Integrates info from other systems, as well as chronological ordering of information.
  4. Central executive - Responsible for coordinating these various systems.

Short Term Memory

Definition: Capacity for holding a small amount of information in an active, readily available state for a short interval.

"Chunking" - bits of short-term memory. We can only hold 4-5 chunks at a time.

Takeaways:

  • Words are easier to remember than random letters because each word forms a single chunk. It's easier to remember phone numbers by "chunking" the list of digits.
  • Recognition is easier than recall - thus, minimize the memory load on the user by relying more on their ability to recognize things than to recall them.
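
The phone-number point above can be sketched in a few lines: an interface that displays digits in chunks leans on a handful of memorable groups instead of asking the user to hold ten separate digits. Names and the chunk sizes are my own:

```python
# Sketch: display a digit string in chunks instead of one flat run.
def chunk_digits(digits, sizes=(3, 3, 4)):
    """Split a digit string into groups (default: US phone-number style)."""
    chunks, start = [], 0
    for size in sizes:
        chunks.append(digits[start:start + size])
        start += size
    return "-".join(chunks)

print(chunk_digits("4045551234"))  # "404-555-1234": three chunks, not ten digits
```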

Long Term Memory

Definition: Seemingly unlimited store of memories. But it's harder to put something in there: you generally need to load it into short-term memory several times.

Leitner system: A way of memorizing key-value pairs (ie. flashcards).

  • Cards I don’t remember this time get moved back to the left; any that I do remember stay on the right. Repeat.
  • The cards I remember least are loaded into short-term memory most often, solidifying them in my long-term memory.
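
The Leitner promotion/demotion rule can be sketched as a tiny function. The box numbering and the cap (`max_box`) are illustrative assumptions, not from the notes:

```python
# Sketch of the Leitner rule: box 0 is reviewed most often; a correct
# answer promotes a card one box, and a miss sends it back to box 0.
def update_box(current_box, correct, max_box=2):
    """Promote a remembered card one box; send a missed card back to box 0."""
    return min(current_box + 1, max_box) if correct else 0

box = update_box(update_box(0, True), True)  # two correct answers in a row
print(box)                     # the card now sits in the least-reviewed box
print(update_box(box, False))  # one miss sends it back for frequent review
```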

Cognition

Learning

"When we design interfaces, we are in some ways hoping the user has to learn as little as possible to find the interface useful. "

2 Kinds of Learning:

  1. Procedural Learning - Learning by doing (e.g. playing an instrument). Mainly what is covered in HCI.
    • Unconscious competence - "(When you have strong procedural knowledge), it can be difficult to explain to someone who lacks that competence because you aren’t sure what makes you good at it."
    • This leads to experts designing interfaces that are hard for others to use.
  2. Declarative Learning - Learning about something (e.g. association of concepts).

Cognitive Load

Definition: The amount of working memory resources used.

2 major implications on designing interfaces:

  1. Reduce the cognitive load posed by the interface so the user can dedicate more of their resources to the task itself.
  2. Understand, for our context, what other tasks are competing for cognitive resources.
    • e.g. a driving GPS - be aware that users have few cognitive resources left to devote to interacting with the interface

Example: Programming

  • High cognitive load. Lots of short term memory required to remember syntax, variables, et cetera.
  • IDEs can mitigate these issues through error / type checking.
  • Distributed cognition - "Distributing the cognitive load more evenly between components of the system: myself and the computer"

5 Tips to Reduce Cognitive Load

  1. Multiple modalities - Describe things verbally, but also present visually to prevent over-indexing on one.
  2. Let modalities complement each other - Don't present different content in multiple modalities. Make it complementary.
  3. Give the user control of the pace - Time-based events stresses users out. Let them control the pace.
  4. Emphasize essential content and minimize clutter - Emphasize the most common actions while still giving access to the full range of possible options. Don't just show all of them in a flat context.
  5. Offload tasks - e.g. if a user needs to remember something they entered on a preview screen, show them what they entered. If there’s a task they need to do manually that can be triggered automatically, trigger it automatically.

Motor System

Significance: In designing interfaces, we’re also interested in what is physically possible for users to do. Includes how fast / precise they can take an action (e.g. tapping).
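
One standard way to quantify the speed/precision trade-off is Fitts's law, which the notes don't name but which predicts pointing time from target distance and width. The constants below are device-specific placeholders, not measured values:

```python
# Sketch of Fitts's law (a standard HCI model, not named in the notes):
# predicted movement time grows with distance to a target and shrinks as
# the target widens. Constants a and b are placeholder values.
import math

def fitts_time(distance, width, a=0.1, b=0.1):
    """Shannon formulation: MT = a + b * log2(D / W + 1), in seconds."""
    return a + b * math.log2(distance / width + 1)

# A small 20px button vs. a wide 80px button, both 200px from the pointer:
print(fitts_time(200, 20) > fitts_time(200, 80))  # True: smaller target, slower aim
```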

Example: Spotify control widget

  • On the left is the version that sits in the tray at the top of the screen; on the right is the version on the lock screen.
  • 'X' button closes, which is consistent with other applications.
  • '+' sign is not consistent, unclear what it does.
  • Precision of tapping on the right is much lower because the buttons are closer together. This leads to errors in the motor system because the designer didn't account for it in the design.
  • We need to make our interface tolerant of errors.
  • In this case, make the user double-tap to actually close the app, or add a confirmation action.
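
The double-tap-to-confirm fix suggested above can be sketched as a small state machine: the first tap only "arms" the close button, and only a second tap within a short window actually closes. The class and parameter names are my own:

```python
# Sketch: a destructive action (closing the app) only fires on the second
# tap within a short window, so a single stray tap near the 'X' is harmless.
import time

class CloseButton:
    def __init__(self, window=1.5):
        self.window = window   # seconds allowed between the two taps
        self.last_tap = None

    def tap(self, now=None):
        """Return True only when this tap confirms a recent prior tap."""
        now = time.monotonic() if now is None else now
        confirmed = (self.last_tap is not None
                     and now - self.last_tap <= self.window)
        self.last_tap = None if confirmed else now
        return confirmed

btn = CloseButton()
print(btn.tap(now=0.0))   # False: first tap only arms the button
print(btn.tap(now=0.5))   # True: confirmed within the window -> close
print(btn.tap(now=10.0))  # False: the button was reset after closing
```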

3.4 Design Alternatives

Two big mistakes:

  1. Designing an interface without needfinding.
  2. Settling on a single design idea or genre too early. Manifested through:
    • Staying too allegiant to existing designs
    • Focusing too strongly on one alternative from the beginning.

Design Space

Definition: The area in which we design our interfaces.

  • Goal: Explore the possible design space, beyond traditional forms.
  • e.g. home security - the initial design space might be limited to a wall-mounted display.
  • Design space may expand over time.

Individual Brainstorming

Goal: Generate lots of ideas

  • Research shows it is best to individually brainstorm before sharing with a group.
  • Don't reject any ideas.

5 Tips for Effective Individual Brainstorming

  1. Write down the core problem - Remain grounded on the core problem.
  2. Constrain yourself - Decide that you want at least one idea in a number of different categories (touch, voice, haptic, etc).
  3. Aim for 20 - Don’t stop until you have 20 ideas. This forces you to think through the problem rather than getting tunnel vision on an early idea.
  4. Take a break - You don’t need to come up with all of these at once. Stop brainstorming and decide to continue a couple days later, but be ready to write down new ideas that come to you.
  5. Divide and conquer - If you’re dealing with a big problem, divide it into smaller problems and brainstorm solutions for those.

Group Brainstorming

4 behaviors in group brainstorming that can block progress (Thompson, 2008) + 1 more:

  1. Social loafing - tendency to exert less effort working in groups than working alone.
  2. Conformity - Groups lead to convergent thinking.
  3. Production Blocking - Some individuals tend to dominate the conversation and block others' participation - based not on the quality of their ideas, but on the volume of their opinions.
  4. Performance Matching - Groups tend to converge in passion and performance, which leads to a loss of momentum over time.
  5. Power Dynamics - Tendency to defer to seniors.

Takeaway:

"We should enter into group brainstorming with strong ideas of how to address these issues, ideally after a phase of individual brainstorming has occurred."

Rules for Group Brainstorming

Osborn, 1957:

  1. Expressiveness - Any idea that comes to mind, share it out loud, no matter how strange.
  2. Nonevaluation - No criticizing ideas. No evaluating ideas yet.
  3. Quantity - Brainstorm as many as possible. The more you have, the greater your chance of finding a novel one.
  4. Building - While you shouldn’t criticize others’ ideas, you should absolutely try to build on them.

Oxley, Dzindolet, and Paulus, 1996:

  1. Stay focused - Keep the goal in mind at all times.
  2. No explaining ideas - Say the idea and move on. No justifying ideas.
  3. When you hit a roadblock, revisit the problem - Say it again out loud.
  4. Encourage others - If someone isn’t speaking up, encourage them to do so.

Takeaway:

"Note that all eight of these rules prescribe what individuals should do, but they’re only effective if every individual does them. So, it’s good to cover these rules, post them publicly, and call one another on breaking from them."

5 Tips to Designing a Good Brainstorming Activity

  1. Go through every individual idea
    • Have participants perform individual brainstorming ahead of time and bring ideas to the group brainstorming session
    • Explicitly make sure to go through each one. That will help avoid converging around an idea too early.
  2. Find the optimal size
    • Social loafing occurs when there’s a lack of individual responsibility. When you have so many people that not everyone would get to talk anyway, it’s easy for disengagement to occur.
    • 5 people per group is optimal.
  3. Set clear rules for communication
    • Get a 20-second hourglass or timer, and when someone starts talking, start it -- once the timer is up, someone else gets to speak.
    • The goal is to ensure no one can block others’ ideas, whether intentionally or accidentally.
  4. Set clear expectations
    • Enthusiasm starts to wane when people are unsure how long a session will go or what will mark its end.
    • You might set the session to go a certain amount of time, or dictate a certain number of ideas get generated: no matter how you do it, make sure people can assess where in the brainstorming session they are.
  5. End with ideas, not decisions
    • It’s tempting to want to leave a brainstorming session with a single idea on which to move forward. That’s not the goal, though.
    • Your brainstorming session should end with several ideas. Then, let them ruminate in everyone’s minds before coming back and choosing the ideas to pursue.

Fleshing Out Ideas

Goal: Narrow the ideas down to 3-4 that are worth prototyping.

Personas

Definition: Create actual characters representing the user.

  • Understand motivations (why?), environment (where?), et cetera.
  • Create at least 3-4 of these personas for diverse stakeholders.
  • Revisit design alternatives with these personas and see if it'll work for them.

User Profiles

Definition: A large number of different variables about users, with the possibilities listed out for each. More demographic than personas.

  • Don't design to cater for everybody
  • Don't design without a specific audience in mind

Examples:

  • novice vs expert exercisers.
  • serious vs casual readers.
  • high vs low motivation users.
  • tech literate vs tech illiterate individuals.
  • users that use the interface rarely vs often.

Timelines (aka journey maps)

Definition: Take personas and stretch them over the timeline of the task in which we're interested.

  • What prompts this persona to start engaging in the task?
  • What actions lead up to starting the task?
  • How are they feeling at every stage of the task?
  • How would each design alternative impact their experience throughout this process?
    • If users hesitate to engage with the exercise app, then make it more cheerful and motivational.

Scenarios / Storyboards

Definition: Examine specific scenarios users may encounter while using the interface.

  • Why? Discuss specific kinds of interactions and events we want to handle
  • More specific than timelines.
  • "Video mockup" - show what someone would actually look like interacting with a certain interface.

Scenarios for audiobook:

  • Noisy environment - Rules out speech recognition.
  • Lifting weights - Rules out gestures

User Modeling

Definition: Creating an interaction model of the user and their goals.

  • Incorporate GOMS (Goals, Operators, Methods, and Selection Rules) and cognitive task analysis to design alternatives.
  • More analytical than personas, which give an empathetic view of the user's experience; user modeling gives measurable views of the UX.

Checkout model example: Given design alternatives, what efficiency / speed is associated with each? We can measure how efficient one design is compared to another.

Design Alternative Pros / Cons

Exploring Ideas

Applying learnings:

  • Rule out infeasible ideas.
  • Create timelines covering sequence of events in performing the task.
  • Create user personas to explore cognitive load of users, understand competing tasks. Make sure to cover all types of users.
  • Come up with 3-4 design prototypes based on these explorations.

2.5 Design Principles and Heuristics

Main references:

  1. Norman’s book “Design of Everyday Things” - general usability
  2. Nielsen’s book “Usability Engineering” - 10 usability heuristics.
  3. Constantine & Lockwood’s “Software for Use: A Practical Guide to the Essential Models and Methods of Usage-Centered Design” - Usage-centered design
  4. Mace Universal Design Institute (NC State) - universal interface design

TODO: Read more on nuances of perceptibility, tolerance and feedback between these authors

Design Principles

Condensed from the references above.

1. Discoverability

Definition: Relevant functions should be made visible so that the user can discover them, as opposed to having to read about them in the documentation or learn them through a tutorial.

Examples: Discovering functionality via toolbar in application.

2. Simplicity

Definition: The user should only be given as much information as they need.

3. Affordances

Definition: "Relationship between the properties of an object and the capabilities of the agent that determine how the object could be possibly used" (Norman)

  • Tells user how an interface should be used by its very design.

Example: Software buttons: clicking makes them look like they're being depressed. A drag bar for selecting color visualizes the space of options.

Signifiers

Definition: In-context instructions, such as arrows to indicate which way to swipe or a menu icon to indicate how to access the options.

Affordance vs Signifiers

  1. Affordances, to Norman, are actually inherent properties of a device. (Door bar with latch has inherent property to be pulled)
  2. Perceived affordance: Property attributed to object by human observer. Can be inaccurate (door handles meant to be pushed).
  3. Signifier: Anything that helps with matching perceived affordance with actual affordance.
  4. You can't add affordances. Can only add signifiers.

4. Mapping

Definition: Relationship between an interface's controls and their effects in the world

Examples: A multi-monitor display arrangement that matches the physical layout. Color range selection, where the color is shown along the slider.

5. Perceptibility

Definition: User’s ability to actually perceive the state of the system

  • Nielsen is interested in perceptibility of the state of the system, like whether things are on or off, while universal design is concerned with the perceptibility of information, like how easy signs can be read.

Example: Light switches (state determined by whether up or down), oven switch (can see where the dial is set).

  • Generally easier to convey with a physical design.
  • Bad example: Ceiling fan, no idea of its state without some observation.

6. Consistency

Definition: We should be consistent both within and across interfaces to minimize the amount of learning the user needs to do to learn our interface. Follow convention if it exists.

Example: URL links should be highlighted in a different color. Use consistent hotkeys shared with other programs.

7. Flexibility

Definition: Don't force users against their preference.

  • Dix: Advocate user customizability and supporting multiple designs for the same task.
  • Nielsen: Cater to both novice and expert users
  • NC: Accommodate users of various abilities and preferences

8. Equity

Definition: Equity is largely about making the user experience the same for all users

  • Flexibility is a means to achieve that.

9 + 10. Ease and Comfort

Definition:

  • Mace: Be used efficiently and comfortably without causing fatigue. Appropriate size/space provided.

Examples:

  • Buttons should match the dimensions of human fingers. Don't make them too small (people may have tremors).

11. Structure

Definition:

  • Constantine & Lockwood: Purposeful structure; consistent models, apparent and recognizable. Put similar things together, dissimilar things apart.

Example: Newspaper layout. Still applies to digital media, though with less text since article can be embedded.

12. Constraints

Definition: Prevent the user from performing erroneously in the first place by constraining their choices/actions.

Example: A password reset screen explicitly tells you the password constraints. A three-prong plug can only be installed one way.

Norman's 4 Types of constraints:

  1. Physical - Physically constrained (e.g. power plugs)
  2. Cultural - which side of escalator to stand
  3. Semantic - constraints inherent to the meaning of situation (rearview mirror)
  4. Logical - self-evident based on the situation (e.g. the last remaining piece of hardware logically goes in the last remaining hole).
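The password-reset example above can be sketched as explicit, checkable constraints. The specific rules are invented for illustration:

```python
# Constraints principle: state the rules up front and check input against
# them, preventing erroneous submissions in the first place.
# These particular password rules are invented for illustration.
RULES = [
    ("at least 8 characters", lambda pw: len(pw) >= 8),
    ("at least one digit", lambda pw: any(c.isdigit() for c in pw)),
    ("at least one uppercase letter", lambda pw: any(c.isupper() for c in pw)),
]

def violated(pw: str) -> list[str]:
    """Return the human-readable rules the candidate password fails."""
    return [desc for desc, ok in RULES if not ok(pw)]

print(violated("abc"))       # all three rules fail
print(violated("Abcdef12"))  # []
```

Showing the failing rules back to the user doubles as feedback in simple language.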

13. Tolerance

Definition: Allow users to undo and redo when mistakes are made.

  • Nielsen: Interested in allowing users to poke around. Enhances discoverability.
  • Constantine & Lockwood & Mace: Recovering from mistakes.
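A minimal sketch of the tolerance principle: two stacks are enough to give users undo and redo, which makes mistakes cheap and exploration safe.

```python
# Undo/redo via two stacks (tolerance principle): every action pushes the
# old state onto the undo stack; undo moves states to the redo stack.
class Editor:
    def __init__(self):
        self.state = ""
        self._undo = []  # past states
        self._redo = []  # states that were undone

    def do(self, new_state: str) -> None:
        self._undo.append(self.state)
        self._redo.clear()  # a fresh action invalidates the redo history
        self.state = new_state

    def undo(self) -> None:
        if self._undo:
            self._redo.append(self.state)
            self.state = self._undo.pop()

    def redo(self) -> None:
        if self._redo:
            self._undo.append(self.state)
            self.state = self._redo.pop()

e = Editor()
e.do("hello")
e.do("hello world")
e.undo()
print(e.state)  # hello
e.redo()
print(e.state)  # hello world
```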

14. Feedback

Definition:

  • Norman: Needs to be immediate and informative. Poor feedback is worse than no feedback.
  • Nielsen: Simple language that tells you how to solve the problem.
  • Constantine & Lockwood: Keep users informed of the current state, what has changed, and errors or exceptions. Clear, concise language.

Example:

  • Bad: Windows blue screen of death - scary and obtuse.

15. Documentation

Definition:

  • Nielsen: Better if system can be used without documentation, but if needed make it easy to search, focused on user's task with concrete steps.

2.6 Mental Models and Representations

Topics:

  1. Mental Models
  2. Representations
  3. Metaphors and analogies
  4. User error: slips and mistakes
  5. Learned helplessness
  6. Expert blindspot

Mental Models

Definition: person’s understanding of the way something in the real world works.

  • Helps us make predictions about the world, and check whether outcomes match our mental model.
  • Mental models are used to simulate events
  • In UI, want to make sure user's mental model matches the way our systems actually work. If not, frustration occurs.
  • How?
    1. Design systems that act the way the user expects them to act
    2. Design systems that teach the user how they actually work

Mental models and education:

  • Often used in education
  • In UI, we're educating users how to behave, but they have low attention span (compared to education context).
  • Need to teach them while they're using the interface.

5 Tips for Learnable Interfaces

(From Dix, Finlay, Abowd, and Beale in their book Human-Computer Interaction)

  1. Predictability - Can the user predict what is going to happen? (e.g. graying out a button)
  2. Synthesizability - Can the user understand the sequence of actions that led to the current state? (e.g. Logs)
  3. Familiarity - Can the user relate actions that they're already familiar with from other experiences (e.g. red = hot, blue = cold)
    • Similar to Norman's affordances.
  4. Generalizability - Can the user use knowledge from another interface onto yours?
    • Similar to Norman's consistency
  5. Consistency - Do similar tasks or operations in the interface behave similarly? (e.g. Ctrl+x is universally "cut", don't do something weird instead)

Representations

Definition: How things are visualized to users, in order to mold their mental model.

Example - Wolf vs Sheep:

  1. Explaining problem via audio is terrible. Use visualizations.
  2. Use symbols that make relationship more explicit by using wolf vs sheep instead of squares and circles.
  3. Leave out distracting details in the visuals (e.g. ships and rivers)
  4. Ideally, show all possible subsequent states (predictability)
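Point 4 (showing all possible subsequent states) can be sketched for the classic wolf/sheep/cabbage river-crossing puzzle: a good representation could surface exactly these legal next states to the user.

```python
# Enumerate the legal next states of the wolf/sheep/cabbage puzzle.
# A state is the frozenset of things on the left bank; the farmer rows
# across alone or with one item, and neither bank may leave the sheep
# unsupervised with the wolf or the cabbage.
ITEMS = {"wolf", "sheep", "cabbage"}
ALL = ITEMS | {"farmer"}

def safe(bank: frozenset) -> bool:
    if "farmer" in bank:
        return True  # the farmer keeps the peace
    return not ({"wolf", "sheep"} <= bank or {"sheep", "cabbage"} <= bank)

def successors(left: frozenset):
    """All legal states reachable in one crossing (the 'predictability' set)."""
    right = frozenset(ALL - left)
    src = left if "farmer" in left else right
    for cargo in [None, *(src & ITEMS)]:
        moving = {"farmer"} | ({cargo} if cargo else set())
        new_left = left - moving if "farmer" in left else left | moving
        if safe(new_left) and safe(frozenset(ALL - new_left)):
            yield new_left

start = frozenset(ALL)  # everything on the left bank
for s in successors(start):
    print(sorted(s))  # only one legal move: take the sheep across
```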

Characteristics of Good Representations

  1. Make relationships explicit
  2. Bring objects and relationships together - visualize objects, but make the relationship easy to understand (wolf vs sheep)
  3. Exclude extraneous details - e.g. exclude river and boat component from wolf vs sheep problem.
  4. Expose natural constraints - Make it easy to know what the constraints are, e.g. wolf vs sheep symbolism.

Representations in Interfaces

Example: Google Calendar

  • Day vs Week vs Month representations - each has its purpose
  • Your schedule vs other's schedule - easy to identify conflicts, rescheduling.

Example: Powerpoint animations

  • animation pane represents 1) length of bar - length of animation, 2) placement - indicates relative timing.

Metaphors and Analogies

Definition: Grounding an interface to something that users already know

  • Why? You can get a solid start in teaching them how to use your interface
  • Example: Newspaper websites use same layout as print edition.
  • Trickier: Medium is not necessarily a blog, nor a newsfeed.
  • Be careful with being too liberal with extending analogies, or else it will confuse users.
  • Mobile keyboards: using same layout as physical keyboard may not be most efficient, but it is familiar to most users compared to gestural / virtual interfaces.

Metaphors and Analogies <-> Design Principles

  1. Consistency - people reason by analogy to past interfaces
  2. Affordance - interfaces should teach the user how the system works
  3. Mapping - representations are important because they allow users to map interfaces to the task at hand and predict what is going to happen.

Slips and Mistakes

Types of slips:

  1. Action-based: User performs wrong action, even though they knew the correct action.
    • e.g. Clicking the wrong button from what they meant to.
    • Dialog example: clicking ‘no’ when you mean ‘yes’ would be an action-based slip
  2. Memory lapse: Users forget something they knew how to do.
    • e.g. Forgetting to start a timer, or initiate logging.
    • Dialog example: the fact that message pops up prevents memory lapse slips.

Types of mistakes:

  1. Rule-based: User correctly assesses the state, but makes the wrong decision based on it.
    • Dialog example: Wanted to save state, but didn't realize clicking "no" would save.
    • From Norman:
      1. Situation is mistakenly interpreted, thereby invoking the wrong goal or plan, leading to following an inappropriate rule
      2. Correct rule is invoked, but the rule itself is faulty, either because it was formulated improperly or because conditions are different than assumed by the rule
      3. Correct rule is invoked, but the outcome is incorrectly evaluated
  2. Knowledge-based: User incorrectly assesses the state
    • Dialog example: Didn't realize they wanted to save in the first place.
    • From Norman:
      • When the situation is novel enough that there are no skills or rules to cover it
  3. Memory lapse: Failing to fully execute a plan because distractions or other memory lapses.
    • Dialog example: User shut down computer and never came back.

How to prevent slips and mistakes:

  1. Prevent routine errors by leveraging consistent practices.
    • Present prompts / dialogues in a way that users are used to seeing.
  2. Let interface offload cognitive load to avoid memory lapse errors.
    • Auto-save user's progress
  3. Minimize rule/knowledge-based errors by leveraging good representations
  4. Leverage the tolerance principle to make sure repercussions can never be too bad.
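Point 2 (offloading memory lapses onto the interface) in a minimal auto-save sketch; the file path and document shape are invented for illustration:

```python
import json
import os
import tempfile

# Auto-save sketch: the interface persists state after every action, so a
# memory-lapse slip ("forgot to save") costs the user nothing.
class AutoSavingDocument:
    def __init__(self, path: str):
        self.path = path
        self.lines: list[str] = []

    def add_line(self, text: str) -> None:
        self.lines.append(text)
        self._autosave()  # the user never has to remember to save

    def _autosave(self) -> None:
        with open(self.path, "w") as f:
            json.dump(self.lines, f)

path = os.path.join(tempfile.gettempdir(), "autosave_demo.json")
doc = AutoSavingDocument(path)
doc.add_line("draft paragraph")

# Even if the user walks away now, the work is already on disk:
with open(path) as f:
    print(json.load(f))  # ['draft paragraph']
```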

Exercise: Sending a text to the wrong person. Slip or mistake?

  • Answer: Slip, because she knew what the correct action was, but made an action-based slip.
  • It would be a mistake if they thought selecting the wrong person was the right way to text the correct person (which doesn't make much sense).

Learned Helplessness

Definition: A user's sense that they are helpless to accomplish their goals in an interface.

From a designer's perspective: What feedback do you need from your user to figure out how you can help them?

Expert Blindspot

Definition: When you’re an expert in something, there are parts of the task that you do subconsciously, without even thinking about them.


END OF MATERIAL FOR TEST 1


3.5 Prototyping

Definition: Looking at the different ideas available to us and actually building things we can put in front of users.

Goal: Get user feedback as quickly and rapidly as possible, and iterate on improved prototypes.

4 concepts behind prototyping:

  1. Representation - What is the prototype?
  2. Fidelity - How complete or mature is the prototype?
  3. Evaluation - How to evaluate the prototype?
  4. Scope - What is the scope of our prototype?

Representation

  • Move from text-only to more visual.

Fidelity

  • Fidelity = completeness or maturity of the prototype
  • Low fidelity: paper or simple drawings, very easy to change.
  • High fidelity: Wireframe, working interface. Harder to put together.

Evaluation

  • Low-fidelity prototype: Evaluate the relative function of an interface; whether it actually works doesn't matter yet.
    • Wireframe: Good for evaluating readability of interface.
  • High-fidelity prototype: Evaluate actual performance.

Scope

  • Horizontal: Cover design as a whole. Shallow.
  • Vertical: Cover specific portion of interface. Deep.
  • Usually start with horizontal and move to vertical.

Tradeoffs in Prototyping

  • "Design for feedback" - The purpose of prototypes is to solicit more feedback.
  • High fidelity - more detailed evaluation, but slow
  • Low fidelity - less detailed evaluation, but fast

5 Tips for Prototyping

  1. Keep prototypes easy to change - enables rapid revision and improvement. Don't sweat the small stuff.
  2. Make it clear that it's a prototype - Manage expectations. Make sure feedback is focusing on relevant things (ie. not color selection).
  3. Be creative - Do whatever it takes to get feedback. Toss and build new prototypes if it isn't working.
  4. Evaluate risks - Reduce risk by getting early feedback. Q: "How much would I lose if I found users hate parts of my design they haven't seen yet?"
  5. Prototype for feedback - The goal of a prototype is feedback. Don't focus on small details yet.

Prototype Representations

Verbal Prototypes

Definition: Verbally explaining what the prototype is to a user.

  • Pros: Extremely easy to implement
  • Cons:
    • Social desirability bias - Users need to feel comfortable enough to disagree with our ideas.
    • Expert blindspot - Never really know if user actually understands our design.
  • Tips: Describe your new ideas with analogies. User can map new idea to existing ones.

Paper Prototypes

Definition: Draw prototype on paper. Show them the prototype and get their thoughts.

  • Pros:
    • Focuses feedback on aspects that matter (e.g. won't focus on fonts)
    • Can rapidly revise
    • Good for flat interface (screens)

"Card prototyping" - each screen / state is on a different card.
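Card prototyping behaves like a small state machine: each card is a screen, and whoever "plays computer" follows a transition table. The screen and action names below are invented for illustration:

```python
# Card prototyping as a tiny state machine: keys are screens (cards),
# values map user actions to the next card to show.
# All screen/action names here are invented.
CARDS = {
    "home":    {"tap search": "search", "tap profile": "profile"},
    "search":  {"tap result": "detail", "back": "home"},
    "detail":  {"back": "search"},
    "profile": {"back": "home"},
}

def run(actions: list[str], start: str = "home") -> str:
    """Replay a user's actions and return the card they end up on."""
    screen = start
    for action in actions:
        screen = CARDS[screen].get(action, screen)  # unknown action: stay put
    return screen

print(run(["tap search", "tap result", "back", "back"]))  # home
```

Sketching the table first also reveals missing cards before any are drawn.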

Wizard of Oz

Definition: User interacts authentically with an interface, and a human supplies functionality that hasn't yet been implemented.

  • e.g. New Alexa feature - responses are created by a human.

Pros:

  • Useful for voice interfaces

Wireframing

Definition: More fleshed out version of paper prototype. Start to think about font size, screen real estate, etc.

Physical Prototypes

Definition: Physical form of new idea. Doesn't have to actually work, can use existing devices and recontextualize it (e.g. car keyfob as bluetooth remote). Ask user to use it and provide feedback.

Pros and Cons

Design Lifecycle Revisited

Multi-level Prototyping

Takeaways:

  • Prototyping can and should exist at different levels of fidelity.
  • Important for complex redesign with multiple layers. Trying to redesign all at once is too much.
  • Strategies: Top-down (focus on experience / structure between layers), or bottom-up (focus on specific screens before moving to another).

Exploring HCI: Prototyping

  • New tech will need more creative prototypes (e.g. "Wizard of Oz" prototypes for gestural / voice interaction).
  • Augmented reality / VR - how to prototype?

2.7 Task Analysis

Lesson Outcomes

  • Students will be able to observe a task and develop a GOMS model of the task.
  • Students will be able to observe a task and perform a cognitive task analysis of the task.
  • Students will be able to describe the relative strengths and weaknesses of different approaches to task analysis.

Task analysis vs needfinding:

  • Needfinding - Methods to collect information
  • Task analysis - Understanding the structure of the information we gather.

Methods for Task Analysis

Definition: What task are they performing?

Two methods for formally articulating the tasks users are completing.

  1. Human information processor models (GOMS) - Focus on processing, input and output.
    • Similar to a processor model of the user.
  2. Cognitive Task Analysis - Understanding the task from the user's perspective.
    • Similar to a predictor model of the user.

GOMS (Goals, Operators, Methods, Selection) Models

4 sets of information it proposes gathering about a task:

  1. Goals - user's goal in the system
  2. Operators - operators the user can perform in the system
  3. Methods - methods for achieving goals in the system
  4. Selection - rules for choosing between competing methods

Motivation:

  • Model proposes every human interacting with system has a set of goals they want to accomplish.
  • To accomplish, they have methods, ie. series of operators.

Example: Security System

Pros / Cons of GOMS Model

Cons:

  1. Doesn't address complexity of the problem - There are likely many methods / submethods to a problem. The standard GOMS model doesn't account for this, though alternatives exist (CMN-GOMS, NGOMSL).
  2. Assumes user is already an expert - Doesn't do good job of accounting for novices or user errors.

Pros:

  1. Can formalize user interaction into steps - We can break down and quantify the efficiency of each step.
    • Ex. Security system - Given two methods, can calculate relative efficiency by time it takes to complete the task.

Takeaway: GOMS model helps us to focus on places where the interface is asking too much of the user

Paper Spotlight: “The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast”

Paper: https://www.di.ubi.pt/~agomes/ihc/artigos/john2.pdf

Paper covers 4 Variations on GOMS - Differs in what additional elements are provided.

  1. KLM-GOMS (Keystroke-Level Model)
    • How: specifies the operators and execution times for an action, and sums them to find the complexity of an interaction.
  2. CMN-GOMS
    • Features submethods and conditions in a strict goal hierarchy.
    • Very granular (e.g. moving text, deleting phrases).
    • Used to find places where there is a lot of complexity that can be cut out.
  3. NGOMSL (Natural GOMS Language)
    • "Structured natural-language notation for representing GOMS models and procedure for constructing them"
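A KLM-GOMS estimate is just a sum of operator times. The durations below are the commonly cited Card, Moran & Newell estimates (in seconds); treat them as rough defaults rather than measurements of your own users, and the two compared methods are hypothetical:

```python
# KLM-GOMS sketch: sum per-operator execution times to estimate how long a
# skilled, error-free user takes to execute a method.
OPERATOR_TIME = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.10,  # mouse button press or release
}

def klm_estimate(operators: str) -> float:
    """Total execution time for an operator sequence like 'MHPBB'."""
    return sum(OPERATOR_TIME[op] for op in operators)

# Compare two hypothetical methods for the same "save file" goal:
menu_method = "MHPBB"      # think, grab mouse, point at menu, click
shortcut_method = "MKK"    # think, press Ctrl+S (two keystrokes)
print(round(klm_estimate(menu_method), 2))      # 3.05
print(round(klm_estimate(shortcut_method), 2))  # 1.91
```

This is exactly how the model quantifies the relative efficiency of competing methods.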

5 Tips: Developing GOMS Models

  1. Focus on small goals - Identify small, moment-by-moment goals and abstract up from there.
  2. Nest goals, not operators - Break down large tasks into smaller tasks.
    • Operators are smallest "atom" of GOMS model, literal actions that must be performed.
    • Can't break down operators.
  3. Differentiate descriptive / prescriptive - Identify whether you're building a model of what people do, or what you want them to do.
    • Don't assume what the users will do.
  4. Assign costs to operators - Take note of how long each action takes.
    • Key to identifying areas of efficiency.
  5. Use GOMS to trim waste - GOMS helps visualize where inefficient operators exist.
    • Combine with costs to quantify how much time it's taking.
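Tips 2 and 4 together: goals nest into subgoals, operators are the leaves, and only operators carry a time cost. A sketch with an invented goal hierarchy and invented operator costs:

```python
# A goal is (label, list of children); an operator is (label, cost in seconds).
# Only operators carry costs; a goal's cost is the sum over its subtree.
# The hierarchy and costs below are invented for illustration.
goal = ("check email", [
    ("open client", [("P", 1.1), ("B", 0.2)]),
    ("find message", [
        ("scan inbox", [("M", 1.35)]),
        ("click message", [("P", 1.1), ("B", 0.2)]),
    ]),
])

def cost(node) -> float:
    """Recursively sum operator costs; goals themselves cost nothing."""
    label, body = node
    if isinstance(body, list):   # a goal: sum its children
        return sum(cost(child) for child in body)
    return body                  # an operator: its time cost

print(round(cost(goal), 2))  # 3.95
```

Summing per-subgoal makes it obvious which branch of the hierarchy to trim for waste.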

Reflections: Task Analysis

Takeaways:

  • Conflict between Behaviorism (e.g. GOMS) and Cognitivism (e.g. cognitive task analysis), different schools of thought.
    • Play out in processor vs predictor-oriented models
  • Behaviorism: Emphasize things that can be observed.
  • Cognitivism: Emphasize internal thought processes.

Cognitive Task Analysis

Definition: A general class of methods for evaluating how users complete tasks.

  • Concerned with understanding the underlying thought process in performing a task.

5 steps for performing cognitive task analyses

  1. Collect preliminary knowledge - Get familiar with the task and how users perform it.
  2. Identify knowledge representations - What kind of things does the user need to know to complete their task?
    • Note that we’re not yet concerned with the actual knowledge they have, only the types or structures of the knowledge they have.
    • e.g. "Does this task involve a series of steps in a certain order? A collection of tasks to check off in any order? A web of knowledge to memorize?"
  3. Apply focused knowledge elicitation methods - Begin to recognize what the user actually knows.
    • e.g. GPS system in cars - identify all the specific actions they take, the knowledge they must have in mind to take those actions, the interruptions that can change their thought process, the equipment involved, and the sensory experience of the user.
  4. Analyze and verify data acquired
    • Confirm with users that our understanding is correct.
    • Formalize it into structures that can be compared and summarized across multiple data-gathering methods.
  5. Format results for the intended application
    • Develop models that show what the user was thinking, feeling, and remembering at any given time, and make relationships explicit.
    • Example of cognitive task analysis of driving. Middle tasks are highly cognitive rather than observable.

Hierarchical Task Analysis

Problem: Goal of CTA is to build models of human reasoning and decision-making, but often tasks are complex and high-level. Models that are too high-level are useless.

Solution: Break down into smaller tasks, to the point the task could be implemented in a variety of contexts

  • e.g. route monitoring - could be GPS in car, or on a bike, or while walking.

Strengths of breaking down large tasks to hierarchy of tasks:

  1. Abstract out unnecessary detail (e.g. checkout sequence of shopping if using 3rd party)
  2. Modularize design choices (some tasks can be used in a variety of contexts)
  3. Hierarchy makes it more organized and easier to reason over.

Example of shopping checkout - Note real CTAs will be much more complex.

Pros and Cons

Pros:

  1. Emphasizes mental processes - Focus on user's perspective.
  2. Formal enough for interface design

Cons:

  1. Time intensive - takes a lot of time to do well.
  2. May de-emphasize context - Focusing too much on what users think may leave out details that are out in the world (e.g. physical capabilities, co-interaction).
  3. Ill-suited for novices - Useful for experts who have very strong models of how to perform a task, less so for those new to the task.

Other Task Analysis Frameworks

For cognitive models:

  1. CDM - Focus on places where critical decisions occur
  2. TKS - Focus on nature of human's knowledge
  3. CFM - Focuses on complexity
  4. Applied CTA / Skill-based CTA - ways to gather info necessary to create a cognitive model.
  5. Production systems (AI) - prescribe rules for users to follow.

2.8 Distributed Cognition

Definition: "Expanding the unit we use to analyze intelligence from a single mind to the mind equipped with other minds and artifacts and their relationships."

  • Paper-and-pencil example - The system solving the math problem comprises me, the pencil, and the paper; the whole can do a lot more than my mind alone.

Paper Spotlight, "How a Cockpit Remembers Its Speeds"

Takeaways: The pilot, plus all the other controls, dictates what the "cockpit" remembers.

Problem: During descent, a plane must make various wing configuration changes which are dependent on speed. Pilot must remember a sequence of speeds at which multiple changes must be made (in narrowly-defined times).

  • Causes high cognitive load.

Solutions: Multiple artifacts perform various cognitive roles in the system.

  1. Booklet - Long term memory, contains speeds for their descent based on certain params. Pilot pulls up pages as needed.
  2. Pin the page - Short term memory; pilots pin it in the cockpit so they're able to check each other's actions throughout.
    • Short term, information pilots can reason over.
  3. "Speed bugs" - Working memory, notches on the speedometer. Once going past, pilots know they need to change booklet pages and config.
    • Working memory because pilots can compare visually at a glance; they don't need to hold the numbers in their heads.
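The speed-bug idea amounts to a threshold lookup: the artifact, not the pilot, holds the numbers. All speeds and configuration names below are invented for illustration:

```python
import bisect

# Speed bugs as a threshold lookup: notches mark the speeds at which the
# wing configuration must change, so the pilot only compares, never recalls.
# These thresholds (knots) and configuration names are invented.
THRESHOLDS = [140, 170, 200]  # ascending speed thresholds
CONFIGS = ["full flaps", "flaps 20", "flaps 5", "clean wing"]

def wing_config(airspeed: float) -> str:
    """Which configuration applies at the current airspeed."""
    return CONFIGS[bisect.bisect_left(THRESHOLDS, airspeed)]

print(wing_config(130))  # full flaps
print(wing_config(185))  # flaps 5
```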

Distributed Cognition and Cognitive Load

  • Artifacts add cognitive resources to distribute cognitive load.
  • Helpful when performing actions where cognitive load is high.
  • Example: GPS / cruise control distribute tasks that humans used to control to the car. This decreases cognitive load to the user.

Example - Checkbooks: What artifacts distribute cognition?

  • chair, table, light - no, not serving cognitive roles
  • bills - external memory of bills
  • checkbook - yes, recordkeeper for user's bills

Is distributed cognition a design principle? No.

Significance: Important because distributed cognition interfaces exhibit and extend our cognitive qualities.

  • "Those systems might perceive, they might remember, they might learn, they might act on our behalf: in some way, they’re offloading a cognitive task from us, and as a result, the system comprised of us and the interface is capable of doing more."

Distributed Cognition to Social Cognition

Motivation: Technology is rapidly growing into the social sphere (e.g. social media), but interfaces are often at odds with how we really think about social interaction.

  • Example: Video game console can communicate private info about your gaming habits that you don't want friends to know.

Design Challenge: Social Cognition

  • Design Challenge: "How do we design a social video gaming system that nonetheless protects against these kinds of perceptions?"
  • Answer: "In-group" - Your behaviors are only seen by those who share them, and thus are in no position to judge them.
    • e.g. Contacts can’t just look up my game-playing habits, but if they’re also playing or interested in playing, they’re alerted that I’m playing as well.

Situated Action

Definition: Focuses on the learners' responsiveness to their environments and the ways in which human action arises in “the flux of real activity” (Nardi, 1996).

  • Interested in novel, situational problems that arise.
  • Designers are not in charge of how the user will perform the task.
  • Designers must examine what users actually do, and design within that context.

Takeaways:

  1. We must examine the interfaces we design within the context in which they're used.
  2. We must understand that the task the users perform are born out of their interactions with the interface - we don't define it.

    "We can try to structure it as much as we can, but until the users get started, the task doesn’t exist -- and once they get started they play a significant role in defining the task."

Situated Action and Memory

Takeaways:

  • A user's situated action, ie. the context of the action, improves recall considerably.
  • Why? Because knowing the context means understanding its importance and its dependencies. When it's a list of tasks on a list, that context is gone.

Paper Spotlight: “Plans and Situated Actions: The problem of human-machine communication”

Suchman, L. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge University Press. http://bitsavers.trailing-edge.com/pdf/xerox/parc/techReports/ISL-6_Plans_and_Situated_Actions.pdf

What: Compares between two views of human action.

  1. First view - Organization and significance of action is derived from plans
    • Most often adopted when developing interfaces - users make plans and carry out those plans.
  2. Second view - Plans are derived from situated action.
    • Users simply act in the world, and our plans are derived from those actions.

Significance:

"Rather than assuming the user has a plan in mind that they are actively carrying out, we might consider viewing only their immediate interaction with the current screen."

Activity Theory

What: Large set of theories regarding interactions between various pieces of activity

  • Predates HCI (idea of interacting through an interface comes from this)

Three main contributions of activity theory

  1. Activity theory generalizes unit of analysis from the task to the activity. (ie. why they're doing it and what it means to them)
    • ie. the difference between being required to do the task versus wanting to.
  2. Activity theory puts an emphasis on the idea that we can separate low-level operations from high-level actions and activities.
    • Prior to 1980 when idea came to HCI, HCI was more focused on minute improvements (speed to click, button press, etc).
  3. Activity theory points out that users can move up and down this hierarchy
    • ie. notion of learning - beginners must be aware of every minute action, but experts can do it without thinking.

Paper Spotlight: “Activity Theory and HCI”

Nardi, B. A. (1996). Context and Consciousness: Activity Theory and Human-computer Interaction. MIT Press.

What: Collection of papers on HCI

Significance:

  • Emphasizes activity theory as a descriptive theory for understanding the way a system works, rather than predicting how it works.

    “Activity theory offers a set of perspectives on human activity and a set of concepts for describing that activity” and “This … is exactly what HCI research needs as we struggle to understand and describe ‘context’, ‘situation’, ‘practice’.”

  • Back in 1996, they were still constructing the debate on the importance of context in design.

“Studying Context: A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition”

What: Another paper in book, compares and contrasts the three philosophies.

Significance:

"Attention to the shaping force of goals in activity theory and distributed cognition... contrasts with the contingent, responsive, improvisatory emphasis of situated action."

  • Activity theory and distributed cognition are focused on goals, while situated action is interested in improvisation.

"Goals are our musings out loud about why we did something after we have done it"

  • Situated actions views goals to be constructed retroactively, interpreting past actions.

On "persistent structures" - Activity theory and distributed cognition differ on how they evaluate symmetry between people and artifacts.

“Activity theory, with its emphasis on... motive and consciousness... sees artifacts and people as different.”

  • Activity theory regards them as fundamentally different, given that humans have consciousness.

“Distributed cognition... views people and [artifacts] as conceptually equivalent... ‘agents’ in the system.”

  • Distributed cognition treats people and artifacts as equals.
  • Artifacts can have cognitive roles.

3.6 Evaluation

Definition: Evaluation is where we take what we’ve designed and put it in front of users to get their feedback.

Types:

  1. Qualitative - Gathered early on. What do people like/dislike? Is it readable and understandable? Does it minimize workload; is it intuitive and easy to learn?
    • "Usable" is visited later.
    • Similar to needfinding
  2. Empirical - Quantitative data.
    • We might want to actually measure whether the time to complete a task has changed or whether the number of sales has increased.
    • Controlled experiments.
    • Need more participants.
  3. Predictive - Predicting what the results of the user evaluation will be.
    • Evaluated without users. Faster than real participants.

Evaluation Terminology

  1. Reliability - Whether or not some assessment of some phenomenon is consistent.
    • "If we were to conduct the same procedure again, how likely is it that we’d get the same results? That’s reliability."
  2. Validity - Whether or not an assessment is accurate.
    • Data can be consistently inaccurate, hence we want to assess accuracy.
    • "How accurately does our data actually capture the real-world phenomenon we care about? That’s validity."
  3. Generalizability - ie. external validity. Whether or not data we learn can be applied to broader population.
    • "To what extent can we apply these conclusions to people that weren’t in the evaluation? That’s generalizability."
  4. Precision - Measure of how specific an assessment is.
    • "How specific are our conclusions and observations? That’s precision."

5 Tips: What to Evaluate

  1. Efficiency - How long does it take users to accomplish certain tasks?
  2. Accuracy - How many errors do users commit while accomplishing the task?
  3. Learnability - Sit a new user down in front of the interface. Define some standard for expertise. How long does it take the user to hit that level of expertise?
  4. Memorability - User’s ability to remember how to use an interface over time
  5. Satisfaction - When we forget to look for the other metrics, we bottom-out in a general notion of ‘satisfaction’, but that doesn’t mean it’s unimportant.
    • E.g. users’ enjoyment of the system or the cognitive load they experience while using the system
    • To avoid social desirability bias, we might want to evaluate this in creative ways, like finding how many participants actually download an app they tested after the session is over.
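
The first two metrics above can be computed directly from session logs. A minimal sketch with entirely hypothetical data (the participant IDs, task times, and error counts below are made up for illustration):

```python
from statistics import mean

# Hypothetical session logs: (participant_id, task_seconds, error_count)
sessions = [
    ("p1", 42.0, 1),
    ("p2", 55.5, 3),
    ("p3", 38.2, 0),
    ("p4", 61.0, 2),
]

efficiency = mean(t for _, t, _ in sessions)   # how long tasks take on average
accuracy = mean(e for _, _, e in sessions)     # average errors committed per task

print(f"mean task time: {efficiency:.1f}s, mean errors: {accuracy:.2f}")
```

Learnability and memorability would come from repeating this measurement across sessions over time; satisfaction needs its own instrument (or a behavioral proxy like the download count mentioned above).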

"It’s important that you very clearly articulate at the beginning what you’re evaluating, what data you’re gathering, and what analysis you will use."

Evaluation Timeline

  • Early on, formative (primary purpose to help redesign). Later, summative (conclude design process, prove that new interface is better)
  • Early eval is more interpretative, qualitative, and informal.
  • Later eval is more empirical, controlled and formal. Demonstrate / assess change.
  • Context - where does eval take place?
    • Early: Lab setting, not live
    • Later: Live, field testing.
    • This lets us focus exclusively on the interface early, then transition to the interface in context later.
    • None are hard and fast rules.

Evaluation Design

Series of steps to perform to ensure that your evaluation is useful:

  1. Define the task you're examining
    • Large/small?
  2. Define performance measures
    • quantitative vs qualitative?
    • helps avoid confirmation bias - makes sure we don't just pick observations that confirm hypotheses.
  3. Develop the experiment
    • How to find users' performance on the performance measures.
    • Ask whether assessment measures are reliable and valid, and whether users we are testing are generalizable.
  4. Recruit participants
    • Explain to them their rights (ethics)
  5. Conduct experiment
  6. Analyze data - what does the data tell us about our performance measures?
  7. Summarize - Summary of data that informs ongoing design process
    • What did our data say, what could be improved.

Qualitative Evaluation

Goal: Get qualitative feedback from user. Similar to needfinding

  • Use interviews / think-aloud / focus groups / surveys / post-event protocols

Type of Qs:

  1. What did you like? What did you dislike?
  2. What were you thinking while using this interface?
  3. What was your goal when you took that particular action?

A Typical Qualitative Evaluation

Questions you’ll have to answer in designing a qualitative evaluation:

  1. Based on prior experience, or live session?
  2. Synchronous (watch live) / asynchronous (participants complete eval on their own)
    • "Synchronous is usually beneficial because we can see a much greater amount of the interaction that is taking place."
  3. How many prototypes will they be evaluating?
    • If multiple, make sure to vary order to reduce bias.
  4. When do you want to get feedback from the user? (Think-aloud / post-event)
  5. Individual / group feedback?
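
When participants evaluate multiple prototypes, the order should be varied to reduce bias. A small sketch of full counterbalancing (the prototype names and participant count are hypothetical):

```python
from itertools import permutations

prototypes = ["A", "B", "C"]  # three hypothetical prototypes

# All possible presentation orders; cycle participants through them so
# no single ordering dominates the results (full counterbalancing).
orders = list(permutations(prototypes))

participants = [f"p{i}" for i in range(12)]
assignment = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

print(assignment["p0"], assignment["p7"])
```

With three prototypes there are six orders, so twelve participants cover every order exactly twice.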

Capturing Qualitative Evaluation

How to record data:

  1. Record the session
    • Pros: Automatic, fidelity.
    • Cons: Intrusive, doesn't capture on-screen interactions well, difficult to analyze.
  2. Note-taking
    • Pros: Cheap and easy
    • Cons: Slow / not comprehensive
  3. Log behavior in software
    • Pros: Automatic, easy to analyze data
    • Cons: Not as comprehensive, very low-level
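
A sketch of option 3, logging behavior in software: automatic and easy to analyze afterwards, but low-level. The event names and the `record` helper are hypothetical:

```python
import time

# Minimal in-software behavior log: append (timestamp, event, detail) tuples.
log = []

def record(event, detail=""):
    log.append((time.time(), event, detail))

# Simulated interaction with a hypothetical media player interface:
record("click", "play_button")
record("seek", "t=30s")
record("click", "pause_button")

clicks = [entry for entry in log if entry[1] == "click"]
print(f"{len(log)} events, {len(clicks)} clicks")
```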

"When selecting a way to capture your qualitative evaluation, ask yourself: will my subjects find the camera intrusive? Am I capturing what happens on screen? How difficult will this data be to analyze?"

5 Tips: Qualitative Evaluation

  1. Run pilot studies - Work out the kinks with easily available subjects before gathering expensive data.
  2. Focus on feedback - Don't teach the user, don't explain the rationale of the prototype. The goal is to get feedback for the next interface.
  3. Use questions when users get stuck - Get info on why they're stuck and what they're thinking.
  4. Instruct users what to do, but not how to do it - (Doesn't always apply) Let users try to figure out how to perform the task.
  5. Capture satisfaction - self-explanatory, easy to miss.

Empirical Evaluation

Definition: Record quantitative data

  • "The biggest benefit of quantitative analysis is its ability to have us perform objective comparisons."
  • Numeric - e.g. layout of button's effect on purchases, efficiency
  • Interpretation - counting errors, coding survey responses
  • Goal - get verifiable and conclusive results
    • Get data to build new theories of how users think when they're using interfaces
    • "If we wanted to prove that gestural interaction has a tougher learning curve than voice interaction or that an audio interface is just as usable as a visual one, we would need to do empirical evaluation between the two."

Designing Empirical Evaluations

Treatment vs Control

  • Goal: Investigate the comparison between the treatments, and end up with a conclusion about how they’re different, and whether difference is based on new feature.

Assigning participants

  • Q: Does each user participate in one treatment, or both?
    • If each only participates in one: "Split the random participants into two groups, and one-by-one, we have them go through their treatment. At the end, we have data from participants in one condition and data from participants in the other, and we can compare them."
    • Or, each participates in both (within-subjects experiment)
      • Lurking variable: which treatment each participant sees first. Make sure to mix up the order.
  • Random assignment: Helps control for bias
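
A minimal sketch of random assignment for a between-subjects design (participant IDs and group sizes are arbitrary; the fixed seed is only so the sketch is reproducible):

```python
import random

participants = [f"p{i}" for i in range(20)]

# Between-subjects: randomly split participants into control and treatment
# to guard against systematic bias in who sees which condition.
random.seed(42)
random.shuffle(participants)
control = participants[:10]
treatment = participants[10:]

assert set(control).isdisjoint(treatment)  # each person sees exactly one condition
```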

Hypothesis Testing

Problem: Difference in numbers could arise just by random chance, are they different enough to conclude they're really different?

Solution: Null hypothesis testing. Steps:

  1. We initially hypothesize that the two things are equal (null hypothesis)
  2. Alternative hypothesis is they are not equal.
  3. Measure whether the difference is big enough to accept the alternative hypothesis. How? (Generally) if there is less than a 5% chance that the difference could have arisen by random chance. If so, the results are statistically significant.
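
One way to make step 3 concrete without any distributional assumptions is a permutation test: repeatedly relabel the pooled data at random and count how often a difference at least as large as the observed one appears. The task times below are hypothetical:

```python
import random
from statistics import mean

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference of means.

    Null hypothesis: the two samples come from the same distribution.
    Returns the estimated probability (p-value) of seeing a difference
    at least as large as the observed one by random relabeling alone.
    """
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter

# Hypothetical task-completion times (seconds) under two designs:
old_ui = [48, 52, 50, 47, 53, 49, 51, 50]
new_ui = [41, 44, 40, 43, 42, 45, 39, 42]

p = permutation_test(old_ui, new_ui)
print(f"p = {p:.4f}; reject null at 5% level: {p < 0.05}")
```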

Quantitative Data and Empirical Tests

Motivation: Null and alternative hypotheses are common to all kinds of hypothesis tests. The specific kind of hypothesis test you conduct, however, depends on the kind of data that you have:

  1. Nominal - Chi-square test checks to see if distribution of values to a number of buckets is the same across two conditions.
    • Independent variable is control/treatment(s). Dependent is distribution of values.
    • Null hypothesis: Distributions are the same. Alternative: Distributions are different.
    • Cons: "Chi-square test doesn’t understand that the values are ordered, and so it doesn’t understand that a systematic shift across the categories is probably more notable."
    • Alternative: The Kolmogorov-Smirnov test does take the order of categories into account (or use the median test to check whether the medians of each are the same).
  2. Interval / Ratio - Student's t-test compares the means of two samples to see if there's a significant chance that they're different.
    • Independent variable is control/treatment(s). Dependent is observed values.
    • Null hypothesis: Samples aren’t different, Alternative: They are.
    • Why good for interval/ratio - doesn't depend on absolute 0 point.
    • Warning: We’re only supposed to use t-tests when the data distribution is normal. If not, use Mann-Whitney U test or Kruskal-Wallis H test
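
For interval/ratio data, the t statistic itself is easy to compute by hand; the sketch below uses Welch's variant, which doesn't assume equal variances. The data are hypothetical, and in practice you'd get a p-value from a statistics package (e.g. scipy.stats.ttest_ind with equal_var=False):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (interval/ratio data).

    For reasonably large samples, |t| above roughly 2 suggests significance
    at the 5% level; look up the exact p-value rather than eyeballing it.
    """
    va, vb = variance(a), variance(b)        # sample variances (n-1 denominator)
    se = sqrt(va / len(a) + vb / len(b))     # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical task-completion times (seconds) under two designs:
old_ui = [48, 52, 50, 47, 53, 49, 51, 50]
new_ui = [41, 44, 40, 43, 42, 45, 39, 42]

t = welch_t(old_ui, new_ui)
print(f"t = {t:.2f}")
```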

Special Statistical Tests (beyond Chi-square/t-test)

Problem: How to handle more than two conditions (treatments)?

  • Issue: Doing three t-tests or three Chi-square tests, one on each pair of conditions.
    • Called repeated testing, and the problem is that it raises the likelihood that we’ll find something that isn’t actually true, which is called a Type I Error.
    • "Remember, we said that we reject a null hypothesis if there’s less than a 5% chance that the data could have been so different due to randomness alone. That means, though, that every time we run an additional test, we’re taking a 1-in-20 chance of finding something that isn’t there."

Solution:

  • Chi-square can handle more than two conditions. However, it doesn't tell us what is different, just that a difference exists.
    • "(If differences exists), it’s acceptable to then follow-up with pairwise Chi-square tests between the conditions to drill into what the difference actually is."
  • Interval/ratio: Use ANOVA (Analysis of Variance) when we have more than two categories and interval or ratio data.
    • One-way ANOVA tests can compare three or more groups simultaneously.
    • Two-way ANOVA looks at two independent variables at the same time.

      - e.g. check 3 interfaces against 2 genders to see if performance is a function of both gender and interface selection.
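
The one-way ANOVA F statistic is just the ratio of between-group to within-group variance. A sketch on hypothetical completion times for three interfaces (for a p-value you'd compare F against the F distribution, e.g. via scipy.stats.f_oneway):

```python
from statistics import mean

def one_way_f(*groups):
    """F statistic for one-way ANOVA: between-group variance over
    within-group variance. A large F suggests at least one group mean
    differs; follow up with the F distribution for significance.
    """
    grand = mean(x for g in groups for x in g)
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Hypothetical completion times under three interface designs:
f = one_way_f([50, 52, 48, 50], [42, 44, 40, 42], [45, 47, 43, 45])
print(f"F = {f:.2f}")
```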

Problem: What if independent variable is interval or ratio?

Solution:

  • Regression analysis used when both independent and dependent variables are interval or ratio data.
  • Binomial tests are used when we have binomial (i.e. binary) data.
    • Use a one-sample binomial test to see if the distribution differs from some chosen proportion (e.g. 0.5 to test against random chance).
    • A two-sample binomial test checks whether two distributions are different.
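
The one-sample case can be computed exactly from the binomial distribution. A sketch with hypothetical preference data (17 of 20 participants preferring one design, tested against chance at p = 0.5):

```python
from math import comb

def binomial_p_two_sided(successes, n, p=0.5):
    """Exact two-sided binomial test p-value: the total probability of
    every outcome at least as unlikely as the observed count under
    Binomial(n, p).
    """
    probs = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    observed = probs[successes]
    return sum(q for q in probs if q <= observed + 1e-12)

# Hypothetical: 17 of 20 participants preferred the new design.
p_val = binomial_p_two_sided(17, 20)
print(f"p = {p_val:.4f}")
```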

5 Tips: Empirical Evaluation

  1. Control what you can, document what you can’t - Try to make your treatments as identical as possible. However, if there are systematic differences between them, document and report that.
  2. Limit your variables - Focus on varying only 1 or 2 things, and monitor only a handful of things in response.
  3. Work backwards in designing your experiment - Decide at the start what question you want to answer, then decide the analysis to use, then decide the data to gather.
  4. Script your analyses in advance - Decide in advance what analysis you’ll do, and do it; if it doesn’t give you the results you want, don’t keep reanalyzing until it does.
  5. Pay attention to power - Participant size dictates how big an effect you can detect (detecting a small effect requires a big sample; detecting a large effect requires fewer participants).

Predictive Evaluation

When to use: Only use if we wouldn't otherwise be doing any evaluation.

Types of Predictive Evaluation

Heuristic Evaluation - Hand the interface and a set of guidelines to a few experts to evaluate.

  • Simple: Get data from experts
  • Complex: Building interaction models (how users think about our interfaces), e.g. GOMS
  • GOMS: Can create model of what we predict users will do in our new interface. Or create user profiles to evaluate whether the new design meets these criteria.
  • Simulation-based evaluation: Construct AI agent that interacts with our interface the way a human would.
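
The simplest GOMS-family model, the Keystroke-Level Model (KLM), predicts expert task time with no users at all by summing per-operator time estimates. A sketch using commonly cited operator durations (treat both the durations and the example task as illustrative, not definitive):

```python
# Keystroke-Level Model (KLM): predict expert task time by summing
# operator times. Durations below are commonly cited estimates.
OPERATOR_SECONDS = {
    "K": 0.2,   # keystroke (skilled typist)
    "P": 1.1,   # point at a target with the mouse
    "B": 0.1,   # press or release a mouse button
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_predict(sequence):
    """Sum predicted seconds for a string of KLM operators, e.g. 'MPB'."""
    return sum(OPERATOR_SECONDS[op] for op in sequence)

# Hypothetical task: think, point at a field, click, then type 5 characters.
t = klm_predict("MPB" + "K" * 5)
print(f"predicted time: {t:.2f}s")
```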

Cognitive Walkthrough

Definition: Step through the process of interacting with an interface, mentally simulating at each stage what the user is seeing, thinking, and doing

"Is it reasonable to expect the user to cross the gulf of execution? Is the right action sufficiently obvious? Is the response to the action the one the user would expect?"

Evaluating Prototypes

Goal: Apply multiple evaluation techniques to constantly center our designs around the user

Exploring HCI: Evaluation

In some emerging areas, you’ll also be facing multiple open questions in evaluation.

Take virtual reality, for example: most people you encounter haven’t used virtual reality before. There is going to be a learning curve. How are you going to determine whether the learning curve is acceptable or not? If the user runs into difficulties, how can you tell if those come from your interface or if they’re part of the fundamental VR learning experience?

So, take a moment to brainstorm your evaluation approach for your chosen application area. What kinds of evaluations would you choose, and why?


2.9 Interfaces and Politics

Change: A Third Motivation

Third Motivation: To change the user's behavior

  • Example: A car beeping when the seatbelt is unbuckled - no usability goal, but motivated by safety.

Three goals of HCI:

  1. Help a user do a task
  2. Understand how a user does a task
  3. Change the way a user does a task due to some value we hold, e.g. safety / privacy.

Paper Spotlight: “Do Artifacts Have Politics?”

Langdon Winner, Do Artifacts have Politics? (1980): https://faculty.cc.gatech.edu/~beki/cs4001/Winner.pdf

Takeaway:

  • The push for certain technologies carries with it certain necessary political adjustments.
    • e.g. Nuclear (totalitarian due to inherent danger of it) vs Solar (distributed and egalitarian structure)
  • 2 ways artifacts can be political
    1. Inherently political technologies - Tech that requires top-down hierarchy lend themselves to authoritarian power structures.
    2. "Technical arrangements as forms of order" - Tech itself has no inherent political leanings, but its use in a particular context accomplishes a political purpose
      • Ex: "Winner uses the example of a factory in Chicago in the 1880s that replaced workers with automated machines that produced inferior goods as a way of busting up the union."

Negative Change by Design

Example: Robert Moses and Construction of Highways

  • Bridges designed too low to let buses through, intentionally kept lower socio-economic class from visiting parks. Example of "technical arrangements as forms of order".

Positive Change by Design

  • Interfaces can be designed that lead to positive social change through natural interaction with the system.
  • Facebook's "Like" button - intentionally did not add a "Dislike" button; only wanted to promote positive social interactions, at the expense of usability.

Design Challenge: Change by Design

  • Can motivate users to take action by subtle design.
  • E.g. Pokemon Go motivates people to move intermittently, keeping them from being too sedentary.

Positive Change by Happenstance

"There are numerous examples of positive change happening more as a byproduct of technological advancement than as a goal of it."

Bijker, Of Bicycles, Bakelites, and Bulbs (Toward a Theory of Sociotechnical Change) - https://mitpress.mit.edu/9780262522274/of-bicycles-bakelites-and-bulbs/

  • Bicycle was a driver in both providing female independence and prompting the change in female attire.
  • Its role in women’s liberation was so significant that Susan B. Anthony once said, “I think bicycling has done more to emancipate women than anything else in the world.”

Negative Change by Happenstance

Example: Internet access

  • Metropolitan locations are generally first to get higher speed internet access.
  • Rural areas end up with slow internet.
  • Causes inequitable access to the internet.

Value Sensitive Design

Definition:

The Value-Sensitive Design Lab at the University of Washington defines Value-Sensitive Design by saying: “Value sensitive design seeks to provide theory and method to account for human values in a principled and systematic manner throughout the design process.”

Not only is the interface useful in accomplishing a task and usable by the user, but is it consistent with the user's values?

Privacy by Design

https://www.ipc.on.ca/wp-content/uploads/resources/7foundationalprinciples.pdf

  • One of the most well-developed application areas of value-sensitive design is privacy by design.
  • Privacy is a value, and privacy by design has aimed to preserve that value in the design of systems.
  • It’s possible to design useful, usable interfaces that don’t take privacy into account.

Paper Spotlight: “Value Sensitive Design and Information Systems”

Friedman Paper: https://www.researchgate.net/publication/229068326_Value_Sensitive_Design_and_Information_Systems

Takeaways:

  • Batya Friedman is one of the co-directors of the Value Sensitive Design Research Lab at the University of Washington, and she co-authored one of the seminal papers on the topic: Value Sensitive Design and Information Systems
  • 3 Investigations for Approaching Value-Sensitive Design:
    1. Conceptual questions - Conceptual investigations are thought experiments where we explore the role of values through questions
      • E.g. Who are stakeholders, are all stakeholders affected?
    2. Empirical investigation questions - go out and use real users, exploring how they make sense of interfaces
      • E.g. How do stakeholders apprehend individual values in the interactive context? How do they prioritize individual values and usability considerations?
    3. Technical investigations - Like empirical investigation, but targets the system rather than users
      • E.g. Is the system compatible with the values of the users?
  • Fundamentals of Value-Sensitive Design
    1. Value-sensitive design should be proactive
    2. Value-sensitive design distinguishes between usability and human values.

Value-Sensitive Design Across Cultures

Takeaway: Values can differ between cultures, need to be sensitive to it (e.g. privacy)

5 Tips for Incorporating Value-Sensitive Design

  1. Start early.
    • Identify the values you want to account for early in the design process, and check on them throughout the design process.
    • The nature of value-sensitive design is that it might have significant connections not just to the design of your interface, but to the very core of the task you’re trying to support.
  2. Know your users.
    • You need to know your users’ values.
    • Certain values are incompatible with one another, or at least present challenges for one another. Privacy as a value is in some ways in conflict with the value of record-keeping. To know what to design, you need to know your users’ values.
  3. Consider both direct and indirect stakeholders.
    • We usually think about direct stakeholders -- those are the people that actually use the system.
    • Value-sensitive design encourages us to think about indirect stakeholders, people that do not use the system but are nonetheless affected by it.
    • When you’re designing the internal system for use by a bank, for example, it’s used by bank employees, but bank customers are likely impacted by the design.
  4. Brainstorm the interface’s possibilities.
    • Think not only about how you’re designing the system to be used, but how it could be used.
    • If you wanted to make a system that made it easier for employees to track their hours, consider whether it could be used by employers to find unjust cause for termination.
  5. Choose carefully between supporting values and prescribing values.
    • Designing for change is about prescribing changes to values, but that doesn’t mean we should try to prescribe values for everyone.
    • At the same time, there are certainly values held in the world that we would like to change with our interfaces if possible, with regard to issues like gender equality and economic justice.
    • Be careful and be deliberate about when you choose to support existing values and when you choose to try to change them with your interfaces.

Exploring HCI: Interfaces and Politics

Takeaways:

  • Online education
    • Positives: Gave access to those with barriers to education due to social anxiety, disabilities preventing them from attending physical classes.
    • Negatives: May only provide advantages to majority classes (white, middle-income, male). Not much data showing access to women, international, economically disadvantaged students.

Reversing the Relationship

Takeaway: Society changes technology as well (more often than the other way around)

  • E.g. Subscriptions - it is technologically possible to consolidate all media in one place, but legally restricted by copyright.
  • E.g. Fluorescent light bulbs - intentionally not designed to be as efficient as they could be, due to business interests (power companies)
  • E.g. Exclusivity agreements - no technical reason for a game not to run on other consoles, but business relationships shape the way the tech works.

2.10 Conclusion to Principles

Zooming Out: Human as Processor

  • Processor model distills human's role into goals, operators, methods
  • Narrow view of HCI

Zooming Out: Human as Predictor

  • Task-oriented
  • User actively involved in learning the task
  • Gulf of execution / evaluation
  • Direct engagements
  • Make sure mental models are matching tasks.
  • Tools: Cognitive task analysis

Zooming Out: Human as Participant

  • Focus on context (environment, other users, motivations)
  • Distributed cognition
  • Social cognition - social norms and relationships
  • Positive / negative social change

5 Tips: On-Screen UI Design

  1. Use a grid.
    • Grids are a powerful way of guiding a user’s sight around your interface, highlighting important content, grouping related content, etc. There’s a reason why newspapers and magazines have used grid-based layouts for decades.
  2. Use whitespace.
    • Users are good at consuming small chunks of information at a time.
    • Notice how news articles often use very short paragraphs and highway signs have lots of space around the text.
    • Whitespace works with grids to provide context and guide the user’s visual perception of the interface.
  3. Know your Gestalt principles.
    • Gestalt principles in UI design refer to how users perceive groups of objects.
    • Users group objects together when they’re spatially close together, visually similar, or moving together.
  4. Reduce clutter.
    • The eye has difficulty processing cluttered information, so reduce clutter wherever possible. Grids, whitespace, and Gestalt principles can help with this because they can invisibly communicate content that might otherwise need to be communicated visually.
    • Instead of drawing a box to group controls together, you can surround them with whitespace.
    • Instead of using headers and text to label different portions of some content, you can separate them with a grid. And so on.
  5. Design in grayscale.
    • Color can be a powerful tool, but it also runs awry of universal design. There are enough colorblind individuals in the world that relying on color is problematic.
    • Color can help emphasize the content and structure of your interface, but it shouldn’t be necessary to understand it. Take a stoplight, for example: red means stop and green means go, which is a problem if you’re deuteranopic, or red-green color blind. But the red light is always at the top and the green light is always at the bottom, so if you are deuteranopic, you can still understand what the light is saying. Color emphasizes the content, but the content doesn’t rely on color.

Only Half of the Picture

"There’s one final thing you must understand about the guidelines and heuristics and principles we’ve talked about. They’re only half the picture. They’re necessary for good design, but they aren’t sufficient. You can’t just grab these guidelines off the shelf, throw them at a new task, and expect to have a perfect interface the first time."


3.7 HCI and Agile Development

The demand for rapid HCI

Takeaway:

  • Advent of agile development: Earlier delivery, more continuous improvement, and rapid feedback cycles. How to incorporate into HCI?
  • In the past, getting feedback and fixing mistakes was costly. Now, fixing mistakes can be done "over the air" (ie. "day one patches" in video games, cars) and feedback is very easy to get.
  • Doesn't apply to everything - anything involving hardware, requiring a high-profile first impression, or carrying high stakes still requires a longer design lifecycle.

Commonalities:

  1. Feedback cycles
  2. User feedback
  3. Rapid revisions

Differences:

  1. HCI does it behind the scenes
  2. Agile does it live

Exercise: When to go agile

(figure omitted)

  • (Answers use existing devices and don't have high stakes associated with them)

When to go Agile

(figure omitted)

Paper Spotlight: "Towards a Framework for Integrating Agile Development and User-Centered Design"

Paper (required reading): https://link.springer.com/chapter/10.1007/11774129_15

Takeaways:

  • Similarities between HCI and agile:
    • Both rely on "iterative development process, building on empirical information from previous cycles"
    • Both place heavy emphasis on the user's role in the development process
    • Both emphasize the importance of "team coherence" - whole team should have the user in mind while developing the product.
  • Differences between HCI and agile:
    • HCI places more importance on documentation compared to agile
    • HCI encourages doing research prior to the design work actually starting
  • 5 principles for integrating HCI and agile
    1. User involvement - high user involvement
    2. Collaboration and culture - close team collaboration
    3. Prototyping - "feed developers" with prototypes and user feedback
    4. Project lifecycle - HCI practitioners must have ample time to discover user needs before any code gets released
    5. Project management - HCI/agile must work within a "cohesive project management framework" without being too bureaucratic.

Live Prototyping

Takeaways:

  • These days, constructing an actual interface may be as easy as creating the final prototype (example: wireframing tools, can push them to go "live" very easily)
  • For cases where we are making small revisions, pushing them to prod and getting feedback is a valid way to go.

A/B Testing

  • Rapid software testing between a control and a treatment
  • Not pushed to all users, only a subset.
  • Statistically, no different from t-tests
  • Usually rapidly testing small changes with real users
  • Works well with small changes
  • Can easily roll back
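
A common way to route only a subset of users into the treatment is deterministic hash-based bucketing, so a given user always sees the same condition across visits. A sketch (the experiment name and percentage are hypothetical):

```python
import hashlib

def ab_bucket(user_id, experiment="checkout_button", treatment_pct=10):
    """Deterministically assign a user to control or treatment.

    Hashing (experiment, user) gives a stable, roughly uniform bucket in
    0-99, so only about `treatment_pct` percent of users see the change,
    and a user never flips between conditions on repeat visits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "treatment" if bucket < treatment_pct else "control"

groups = [ab_bucket(f"user{i}") for i in range(1000)]
share = groups.count("treatment") / len(groups)
print(f"treatment share: {share:.1%}")  # roughly 10% by construction
```

Rolling back is then just flipping `treatment_pct` to 0, which also makes the "can easily roll back" point above concrete.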

Agile HCI in the Design Life Cycle

  • Agile HCI is typical design lifecycle but rate of cycling through it is faster
  • We still need to do a needfinding step first, though.

5 Tips: Mitigating Risk in HCI and Agile Development

  1. Start more traditional - start with more traditional needfinding and prototyping (foundational user research), then shift to agile once you have something up and running.
  2. Focus on small changes - don't build from scratch
  3. Adopt a parallel track method - HCI / Dev team split, former does a sprint of needfinding, prototyping, low-fidelity eval. Latter takes those results for development
  4. Be careful with consistency - Don't mess with user's expectations
  5. Nest your design cycle - Keep dropping in small improvements to move towards large change.

3.8 Conclusion to Methods

Summary: Designing audiobooks for exercisers

The first cycle:

  1. Needfinding - get targeted info about what people want and need
  2. Design alternative - brainstorm a bunch of alternatives
    • think about different scenarios and personas to settle on those with the most potential
  3. Prototype - create prototypes for the design alternative
  4. Evaluation - put it in front of users to get initial feedback

The second cycle:

  1. Needfinding - synthesize experiences from eval
  2. Design alternatives - expand on current ideas, flesh them out a bit more, brainstorm them in terms of those personas and scenarios used previously
  3. Prototyping - Increase fidelity of prototypes
  4. Evaluate - get a bit more objective. Try getting empirical evaluation for quantitative results.

The third cycle:

  1. Needfinding - see if certain alternatives just don't work (e.g. gesture-based tech may be too advanced)
  2. Design alternatives - example: hybrid voice/touch interaction system
  3. Prototype - might go back to low fidelity if deciding to pursue another design alternative
  4. Evaluation - Evaluate with users, find if it's good enough to go ahead to prod.

Research Methods Meet Design Principles

(figure omitted)

Takeaways:

  • The design principles we describe in our other unit are deeply integrated throughout this design life cycle.

"In many ways, design principles capture takeaways and conclusions found by this design life cycle in the past in ways that can be transferred to new tasks."

Needfinding:

  • Human abilities - In uncovering needs, many of our needs are driven by our current understanding of human abilities.
  • Task analysis - Allows us to describe needs in formal ways to aid the interface design process.

Design Alternatives:

  • Direct manipulation gives us a family of techniques that we want to emphasize in coming up with our design alternatives.
  • Mental models provide us an understanding of how the design alternatives might mesh with the user’s understanding of the task.
  • Distributed cognition gives us a view on interface design that lends itself to design at a larger level of granularity.

Prototyping:

  • Design principles give us some great rules of thumb to use when creating our initial prototypes and designs.
  • Our understanding of representations ensures that the prototypes we create match with users’ mental models.
  • Invisible interfaces help us remember that the interface should be the conduit between the user and the task, not the focus of attention itself.

Evaluation:

  • The vocabulary of the feedback cycle, the gulfs of execution and evaluation, give us ways to evaluate the interfaces that we design.
  • The notion of politics in interfaces allows us to evaluate the interface not just in terms of its usable interactions, but in the types of society it creates or preserves.

Approaches to User-Centered Design

Participatory Design

  • Definition: all the stakeholders -- including the users themselves -- are involved as part of the design team.
  • Pro: Allows the user perspective to be omnipresent throughout the design process.
  • Con: In participatory design one of the designers is the user… but they’re just one user. Don't let them overrepresent their views.

Action Research

  • Definition: methodology that addresses an immediate problem, and researches it by trying to simultaneously solve it.
  • Data gathered on the success of the approach is then used to inform the understanding of the problem and the future approaches
  • Undertaken by actual users
  • Examples:
    • For example, a teacher might engage in action research by trying a new activity in his classroom and reflecting on the results
    • Manager might use action research by trying a new evaluation system with her employees and noting the changes.

Design-based Research

  • Definition: Designers create interventions based on current understanding of the theory and the problem, and use the success of those interventions to improve our understanding of the theory or the problem.
  • Similar to action research, but can be done by outside practitioners as well.
  • It’s especially common in learning sciences research.
  • Example: if we believed a certain intersection had a lot of jaywalkers because the signs had poor visibility, we might interview people at the intersection for their thoughts; or, we could create a solution that assumes we’re correct, and then use it to evaluate whether or not we were correct.
  • Iteration plays a strong role

4.1 Applications: Technology


4.2 Applications: Ideas


4.3 Applications: Domains


5.1 Course Recap


5.3 Next Steps