HCI - Reading Notes

Week 1 - Foundations of HCI

MacKenzie, I.S. (2013). Chapter 1: Historical Context. Human-Computer Interaction: An Empirical Research Perspective.

  • predecessor to HCI is human factors or ergonomics
  • concerned with human capabilities, limitations, performance, and designs that fit within these params
  • HCI is narrowing this definition to human interaction with computing technology

Historical Context

  • "As We May Think", Vannevar Bush - memex ("associative thinking"), connect points of interest, i.e. hyperlinks/bookmarks.
  • Ivan Sutherland's Sketchpad - manipulation of geometric shapes and lines (objects) on a display using a light pen. Significance: direct manipulation of the interface.
  • Invention of the mouse (1964) - Invented by Douglas Engelbart. Direct manipulation on screen. Requires an "on-screen tracker" to establish correspondence between device space and display space.
    • Compared against other devices: joystick, light pen, knee-controlled lever, "Grafacon". In evaluation, the mouse was most accurate; the knee lever was fastest.
  • Xerox Star (1981) - first commercially released computer system with a GUI. It had windows, icons, menus, and a pointing device (WIMP). It supported direct manipulation and what-you-see-is-what-you-get (WYSIWYG) interaction.
    • Breaks from CLIs, which use a sequential programming paradigm.
    • Direct manipulation requires different approach. Uses event-driven programming.
  • Birth of HCI (1983) - Three key events as markers: the first ACM SIGCHI conference, the publication of Card, Moran, and Newell’s The Psychology of Human-Computer Interaction (1983), and the arrival of the Apple Macintosh, pre-announced with flyers in December 1983
    • ACM SIGCHI conference - association of professionals who work in the research and practice of computer-human interaction
    • "The Psychology of Human-Computer Interaction", Card, Moran, Newell - Concepts: human perceptual input (e.g., the time to visually perceive a stimulus), cognition (e.g., the time to decide on the appropriate reaction), and motor output (e.g., the time to react and move the hand or cursor to a target).
      • Significance: Theory for designers of interfaces. "Convincingly demonstrates why and how models are important and to teach us how to build them."
      • Context is the milieu of basic research in human-computer interaction and related fields.
      • "Whether generating quantitative predictions across alternative design choices or delimiting a problem space to reveal new relationships, a model's purpose is to tease out strengths and weaknesses in a hypothetical design and to elicit opportunities to improve the design."
    • Apple Macintosh - like Xerox Star, but catered to masses.
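The sequential vs. event-driven contrast noted above (CLIs vs. direct manipulation) can be sketched in a few lines. The event names and handlers here are hypothetical, not from the chapter:

```python
def sequential_program(commands):
    """CLI-style: the program dictates the order of interaction."""
    results = []
    for command in commands:            # input is pulled in a fixed sequence
        results.append(f"ran {command}")
    return results

def event_driven_program(events):
    """GUI-style: the user dictates the order; handlers react to events."""
    handlers = {
        "click": lambda target: f"selected {target}",
        "drag": lambda target: f"moved {target}",
    }
    log = []
    for event, target in events:        # events arrive in whatever order the user acts
        log.append(handlers[event](target))
    return log

print(event_driven_program([("click", "icon"), ("drag", "window")]))
# → ['selected icon', 'moved window']
```

The key difference: in the event-driven version the control flow lives in the dispatch table, not in a prescribed sequence of prompts.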

Growth of GUIs

Growth of HCI Research

Early topics:

  • Quality, effectiveness, and efficiency of the interface. How quickly and accurately can people do common tasks using a GUI versus a text-based command-line interface?
  • Menu design - recognition (selecting a command) vs recall (typing), depth vs breadth.

Norman, D. (2013). Chapter 1: The Psychopathology of Everyday Things

Two of the most important characteristics of good design are discoverability and understanding

  1. Discoverability: Is it possible to even figure out what actions are possible and where and how to perform them?
  2. Understanding: What does it all mean? How is the product supposed to be used? What do all the different controls and settings mean?

Fields of design:

  1. Industrial design: The professional service of creating and developing concepts and specifications that optimize the function, value, and appearance of products and systems for the mutual benefit of both user and manufacturer (from the Industrial Design Society of America’s website).
  2. Interaction design: The focus is upon how people interact with technology. The goal is to enhance people’s understanding of what can be done, what is happening, and what has just occurred. Interaction design draws upon principles of psychology, design, art, and emotion to ensure a positive, enjoyable experience.
  3. Experience design: The practice of designing products, processes, services, events, and environments with a focus placed on the quality and enjoyment of the total experience.

"We must design our machines on the assumption that people will make errors."

The role of HCD and Design Specializations

  • Starting with a good understanding of people and the needs that the design is intended to meet.
  • Getting the specification of the thing to be defined is one of the most difficult parts of the design, so much so that the HCD principle is to avoid specifying the problem as long as possible but instead to iterate upon repeated approximations.

Fundamental Principles of Interaction

  • Discoverability results from appropriate application of five fundamental psychological concepts covered in the next few chapters: affordances, signifiers, constraints, mappings, and feedback.
  • But there is a sixth principle: the conceptual model of the system. It is the conceptual model that provides true understanding

Affordance

  • The term affordance refers to the relationship between a physical object and a person.
  • An affordance is a relationship between the properties of an object and the capabilities of the agent that determine just how the object could possibly be used. E.g. chair affords ("is for") support, and therefore, affords sitting.
  • Affordances exist even if they are not visible. For designers, their visibility is critical: visible affordances provide strong clues to the operations of things

Signifiers

  • Signifiers: If an affordance or anti-affordance cannot be perceived, some means of signaling its presence is required
  • Signifier refers to any mark or sound, any perceivable indicator that communicates appropriate behavior to a person.
  • Clarification: A sign is NOT an affordance, it is a signifier.
  • Can be deliberate ("PUSH" sign) or unintentional (a path worn by foot traffic).

Affordance vs Signifiers:

  • Affordances are the possible interactions between people and the environment. Some affordances are perceivable, others are not.
  • Perceived affordances often act as signifiers, but they can be ambiguous.
  • Signifiers signal things, in particular what actions are possible and how they should be done. Signifiers must be perceivable, else they fail to function.

Mapping: When the mapping uses spatial correspondence between the layout of the controls and the devices being controlled, it is easy to determine how to use them

Feedback

  • Requirements: immediate, informative, just the right amount.

Conceptual Models

  • A conceptual model is an explanation, usually highly simplified, of how something works.
  • It doesn’t have to be complete or even accurate as long as it is useful.
  • Can be explained to user, or learned by experience.
  • Bad design: When controls suggest a false conceptual model (e.g. refrigerator with 2 controls for freezer / fridge, but are both affected by either control).

System Image

Definition: The system image is what can be derived from the physical structure that has been built (including documentation).

Norman, D. A. (1986). Cognitive engineering.

Goals:

  1. To understand the fundamental principles behind human action and performance that are relevant for the development of engineering principles of design.
  2. To devise systems that are pleasant to use - the goal is neither efficiency nor ease nor power, although these are all to be desired, but rather systems that are pleasant, even fun

Psychological Variables Differ From Physical Variables: In many situations, the variables easily controlled are not those that the user cares about.

  1. Mapping problems: Which control controls what?
  2. Ease of control
  3. Evaluation - determine if correct outcome has been reached.

Gulf of Execution / Evaluation

Week 2 - Research Ethics and Needfinding

MacKenzie, I.S. (2013). Chapter 4: Scientific Foundations. Human-Computer Interaction: An Empirical Research Perspective. (pp. 121-152). Waltham, MA: Elsevier.

What is research?

3 definitions:

  1. Careful or diligent search - "Search" is key term, trying to find things.
  2. Collecting information about a particular subject - data gathering of a phenomenon.
  3. Research is investigation or experimentation aimed at the discovery and interpretation of facts and revision of accepted theories or laws in light of new facts.
    • In HCI, Experiment = user study
    • Empirical research - encompasses both experimental and non-experimental methods (e.g. building interaction models)
    • Facts - what we seek in experimental research.
    • Theory - hypothesis of a phenomenon
    • Law - More constraining, accepted. e.g. Fitts' law of human motor behavior in HCI domain.
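Fitts' law, cited above as an example of a law in the HCI domain, predicts pointing movement time from target distance and width. A minimal sketch using the Shannon formulation; the intercept and slope coefficients here are illustrative, not fitted to real data:

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.1):
    """Predicted movement time (s) via the Shannon formulation of Fitts' law.

    ID = log2(D/W + 1) is the index of difficulty in bits; a and b are
    empirically fitted intercept and slope (values here are made up).
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A farther or smaller target has a higher ID, hence a longer predicted time.
print(round(fitts_mt(distance=256, width=16), 3))   # ID ≈ 4.09 bits
print(round(fitts_mt(distance=256, width=64), 3))   # ID ≈ 2.32 bits
```

In real studies, a and b come from regressing observed movement times against ID across many trials.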

Additional characteristics of research:

  1. Research must be published - Why? It must extend, refine, or revise the existing body of knowledge in the field.
  2. Citations, references, impact - Connects ideas to other ideas. Supports intellectual honesty. Back up assertions.
    • Number of citations to a paper = impact (e.g. H-index - quantifies both research productivity and overall impact of a body of work)
  3. Reproducibility - Research that cannot be replicated is useless

Research versus engineering versus design

  • Engineers and designers are in the business of building things
    • Trade-off: form (design emphasis) and function (engineering emphasis)
  • Research: Narrow focus, small ideas conceived, prototyped, tested, advanced or discarded.
    • research prototype = mockups, not actual products.
    • "Prototypes should command only as much time, effort, and investment as are needed to generate useful feedback and evolve an idea."
    • "Researchers provide the raw materials and processes engineers and designers work with"

What is empirical research?

Definitions:

  1. Originating in or based on observation or experience.
  2. Relying on experience or observation alone, often without due regard for system and theory
  3. Capable of being verified or disproved by observation or experiment

Research methods

Observational
  • What: interviews, field investigations, contextual inquiries, case studies, field studies, focus groups
  • More qualitative
  • Achieves relevance while sacrificing precision - Real world phenomena are high in relevance, but lack the precision available in controlled laboratory experiments.
  • Focus on why and how
Experimental
  • What: Controlled experiments. Include manipulated variable and response variable (independent vs dependent)
  • Comparison of manipulated variables is key, otherwise not experimental research.
  • The relationship between the independent variable and the dependent variable is one of cause and effect
Correlational
  • What: Look for relationships between variables (e.g. age, income, gender)
  • How: Observation, interviews, surveys, etc.
  • Correlational methods provide a balance between relevance and precision

Observe and measure

How are observations made:

  1. Another human as observer - manual entry
  2. An apparatus is the observer - automatic logs by a computer

Measurement scales:

  1. Nominal - arbitrarily assigning a code to an attribute or a category (license plate numbers, zip codes)
    • Used to count frequency
  2. Ordinal - provide an order or ranking to an attribute
    • Implies ranking
    • Comparison of greater than or less than are possible.
    • Not valid to compute the mean of ordinal data.
  3. Interval - equal distances between adjacent values, but no absolute zero (e.g. temperature Fahrenheit or Celsius)
    • used in questionnaires where a response on a linear scale is solicited (e.g. Likert scale)
  4. Ratio - Ratio data have an absolute zero and support a myriad of calculations to summarize, compare, and test the data.
    • Mathematical operations and stats are possible (add/subtract/mean/stdev)
    • Example: time, occurrence counts
    • Normalization: standardizes and makes data easier to compare (e.g. words-per-minute, error rate)
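The normalization point can be made concrete: raw ratio-scale counts from sessions of different lengths become directly comparable once normalized. A small sketch using the common one-word-equals-five-characters convention for text entry:

```python
def words_per_minute(chars_typed, seconds):
    """Normalize raw character counts to WPM (one 'word' = 5 characters)."""
    return (chars_typed / 5) / (seconds / 60)

def error_rate(errors, chars_typed):
    """Normalize error counts to a proportion for cross-participant comparison."""
    return errors / chars_typed

# Two participants with different session lengths become directly comparable:
print(words_per_minute(chars_typed=1250, seconds=300))  # → 50.0 WPM
print(words_per_minute(chars_typed=500, seconds=100))   # → 60.0 WPM
```

Raw counts (1250 vs. 500 characters) would suggest the first participant was better; normalization shows the opposite.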

Research questions

What: Conduct experimental research to answer (and raise) questions about a new or existing user interface or interaction technique.

Difficulty: People exhibit variable behavior, which affects confidence in our findings.

Questions:

  • Is the new technique any good?
  • Is the new technique better than (interface)?
  • Is the new technique faster than (interface)?
  • Is the new technique faster than (interface) after a bit of practice?
  • Is the measured entry speed (in words per minute) higher for the new technique than for a (interface) after one hour of use?

Internal validity and external validity

Definition: Accuracy of answer (internal) vs breadth of question (external)

  1. Internal validity (definition) is the extent to which an effect observed is due to the test conditions.
    • Why? We want confidence that the difference observed was actually due to inherent differences between the techniques.
  2. External validity (definition) is the extent to which experimental results are generalizable to other people and other situations.
    • Why? To the extent the research pursues broadly framed questions, the results tend to be broadly applicable.

Tradeoffs:

  • Effort to improve external validity through environmental considerations may negatively impact internal validity.
  • The desire to improve external validity through procedural considerations may negatively impact internal validity.

Ecological validity vs external validity:

  • Ecological = Methodology (using materials, tasks, and situations typical of the real world)
  • External = Outcome (obtaining results that generalize to a broad range of people and situations).

Comparative evaluations

Takeaway: "A comparative evaluation yields more valuable and insightful results than a single-interface evaluation"

Relationships: circumstantial and causal

Causal relationship: "condition manipulated in the experiment caused the changes in the human responses that were observed and measured"

  • Different from circumstantial (e.g. cigarettes and cancer)
  • Examined by controlled experiments, where only one variable is changed.
  • Caveat: If the variable manipulated is a naturally occurring attribute of participants, then cause-and-effect conclusions are unreliable.
    • e.g. gender (female, male), personality (extrovert, introvert), handedness (left, right)

Research topics

Finding a topic:

  1. Think small - Narrow down the problem to sub-problems.
  2. Replicate - Replicate an existing experiment from literature. This is an empowering process.
  3. Know the literature
  4. Think inside the box - Just get on with your day, but at every juncture, every interaction, think and question. What happened? Why did it happen? Is there an alternative?

Müller, H., Sedley, A., & Ferrall-Nunge, E. (2014). Survey research in HCI

Müller, H., Sedley, A., & Ferrall-Nunge, E. (2014). Survey research in HCI. In J. Olson & W. Kellogg (Eds.) Ways of Knowing in HCI (pp. 229-266). New York: Springer.

What Questions the Method Can Answer

  1. Measure attitudes
  2. Measure intent
  3. Quantify task success
  4. UX feedback
  5. User characteristics - understand a system's users
  6. Interactions with technology - how users interact with technology in broad terms (social, demographic)
  7. Awareness - helps understand people's awareness of existing technologies
  8. Comparisons - compare users' attitudes / perceptions / experiences across segments, time, geographies, etc.

When to avoid surveys

  1. Precise behaviors - gather from log data instead
  2. Underlying motivations - users often don't know their motivation. Use ethnography or contextual inquiry instead.
  3. Usability evaluations - why users succeeded / failed in a task. Use interviews instead

How to Survey

Research goals and constructs
  • Do the survey constructs focus on results which will directly address research goals and inform stakeholders’ decision making rather than providing merely informative data?
  • Will the results be used for longitudinal comparisons or for one-time decisions?
  • What is the number of responses needed to provide the appropriate level of precision for the insights needed?
Population and sampling
  • Random sampling is best, minimizes sampling bias. e.g. random-digit phone dialing, address-based surveys.
  • Non-probability sampling - snowball recruiting, convenience samples (target ppl easily available). Higher potential for bias.
  • Choosing sample size - determine margin of error. Commonly used are 3-5%. Confidence level indicates how likely the reported metric falls within the margin of error. Typically 95%.
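The margin-of-error arithmetic above can be sketched with the standard sample-size formula for a proportion (worst-case p = 0.5, z = 1.96 for 95% confidence). This is a simplification assuming a simple random sample from a large population:

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Responses needed for a given margin of error: n = z^2 * p(1-p) / e^2.

    p = 0.5 is the worst case (maximum variance); z = 1.96 corresponds to
    the typical 95% confidence level.
    """
    return math.ceil(confidence_z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.05))  # → 385 responses for ±5% at 95% confidence
print(sample_size(0.03))  # → 1068 responses for ±3%
```

Note how halving the margin of error roughly quadruples the required responses, since e appears squared in the denominator.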
Questionnaire design and biases

Common biases: Satisficing - Respondents use a suboptimal amount of effort.

- Respondents are more likely to satisfice when (Krosnick, 1991):
    - Cognitive ability to answer is low.
    - Motivation to answer is low.
    - Question difficulty is high at one of the four stages, resulting in cognitive exertion.
- Avoid by:
    - Keeping answers concise
    - Avoid using same rating scale in series
    - Avoid long surveys
    - Explain importance of survey
    - Avoid trap questions (e.g. "enter 5 in the following box")

Acquiescence Bias - Respondents want to please the surveyor.

- Avoid by:
    1. Using agree/disagree, yes/no, true/false answers
    2. Ask Qs about the underlying construct (?)
    3. Use reverse-keyed constructs (asking same construct both positive and negative).

Social Desirability - respondents answer questions in a manner they feel will be positively perceived by others

- Avoid by allowing anonymous answers.

Response Order Bias - tendency to select the items toward the beginning or the end of an answer list or scale.

Question Order Bias - Each question in a survey has the potential to bias each subsequent question by priming respondents

Review and survey pretesting

Cognitive Pretesting - take the survey while using the think-aloud protocol (similar to a usability study).

Field Testing - Piloting the survey with a small subset of the sample

Implementation and launch

Monitoring Survey Paradata

  • Click-through rate: Of those invited, how many opened the survey.
  • Completion rate: Of those who opened the survey, how many finished the survey.
  • Response rate: Of those invited, how many finished the survey.
  • Break-off rate: Of those who started, how many dropped off on each page.
  • Completion time: The time it took respondents to finish the entire survey.
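The paradata rates above are simple funnel ratios over the invited → opened → finished stages; a quick sketch with hypothetical counts:

```python
def survey_paradata(invited, opened, finished):
    """The four rates defined above, computed from simple funnel counts."""
    return {
        "click_through_rate": opened / invited,    # opened / invited
        "completion_rate": finished / opened,      # finished / opened
        "response_rate": finished / invited,       # finished / invited
        "break_off_rate": (opened - finished) / opened,
    }

rates = survey_paradata(invited=1000, opened=400, finished=300)
print(rates)
# → click-through 0.4, completion 0.75, response 0.3, break-off 0.25
```

Note that response rate is the product of click-through and completion rates, which helps locate where a funnel is leaking.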

Maximizing response rates: "Total Design Method":

  1. Week 1: Initial request with survey
  2. Week 2: Reminder postcard
  3. Week 4: Replacement survey to non-respondents
  4. Week 7: Second replacement survey to non-respondents.

One strategy to maximize the benefit of incentives is to offer a small non-contingent award to all invitees, followed by a larger contingent award to initial non-respondents (Lavrakas, 2011).

Data analysis and reporting

Cleaning:

  1. Dedupe
  2. Remove "speeders"
  3. Remove "straight liners"
  4. Fix missing data

Assessment:

  1. Low inter-item reliability - Respondents who give inconsistent or unreliable responses may not have been paying attention to the questions.
  2. Outliers - 2 to 3 standard deviations from the mean.
  3. Inadequate open-ended responses - often indicate low-quality responses.

Hypothesis testing - probability of a hypothesis being true when comparing groups (using t-test, ANOVA, Chi-square)

Inferential statistics can also be applied to identify connections among variables:

  1. Bivariate correlations are widely used to assess linear relationships between variables.
  2. Linear regression - predicts the proportion of variance explained in a continuous dependent variable.
  3. Logistic regression - predict change in probability of getting a particular value in a binary variable.
  4. Decision trees - probabilities of reaching specific outcomes
  5. Factor analysis - identify groups of covariates, reduce large number of variables into smaller set.
  6. Cluster analysis - categorizing segments.

Analysing Open-ended Responses:

  1. Coding - transform qualitative data to quantitative
  2. Interrater reliability - Cohen's kappa
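Cohen's kappa corrects raw inter-rater agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch with two hypothetical coders labeling the same open-ended responses:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Inter-rater agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(rater1)
    # Observed agreement: proportion of items both raters coded identically.
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    counts1, counts2 = Counter(rater1), Counter(rater2)
    p_expected = sum(
        (counts1[c] / n) * (counts2[c] / n) for c in counts1.keys() | counts2.keys()
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Two coders labeling the same 10 responses (hypothetical codes)
r1 = ["bug", "bug", "praise", "feature", "bug", "praise", "feature", "bug", "bug", "praise"]
r2 = ["bug", "bug", "praise", "feature", "bug", "bug", "feature", "bug", "praise", "praise"]
print(round(cohens_kappa(r1, r2), 3))
```

Here raw agreement is 0.8, but kappa is lower (~0.68) because some agreement would occur by chance given the label frequencies.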

Week 3 - Invisible Interfaces and Human Abilities

Norman, D. (2013). Chapter 2: The Psychology of Everyday Actions

https://gatech.instructure.com/courses/340124/files/folder/Required%20Readings

Gulf of Evaluation reflects the amount of effort that the person must make to interpret the physical state of the device and to determine how well the expectations and intentions have been met

  • Mitigate with feedback and good conceptual model.

7 stages of action:

The specific actions bridge the gap between what we would like to have done (our goals) and all possible physical actions to achieve those goals.

  1. Goal (form the goal)
  2. Plan (the action)
  3. Specify (an action sequence)
  4. Perform (the action sequence)
  5. Perceive (the state of the world)
  6. Interpret (the perception)
  7. Compare (the outcome with the goal)

Overlearning: Skills where performance is effortless, done automatically with little or no awareness.

Systems of Cognition

Subconscious and conscious systems of cognition (Table)

Human Cognition and Emotion

  1. Visceral level - "lizard brain". Basic protective mechanisms. Quick judgments. Completely subconscious.
    • In design: Immediate perception (ring tone, appearances).
  2. Behavioral level - learned skills, largely subconscious. E.g. motor skills.
    • In design: "For designers, the most critical aspect of the behavioral level is that every action is associated with an expectation."
  3. Reflective level - conscious cognition. Deep understanding.
    • In design: Reflection / looking back, evaluating, causality.
    • Memories are powerful tools in design. e.g. brand impact.

"All three levels of processing work together to determine a person's cognitive and emotional state. High-level reflective cognition can trigger lower-level emotions. Lower-level emotions can trigger higher-level reflective cognition." (Norman, p.55)

"Emotional Design" - design that uses all three.

Learned Helplessness

Definition: Situation where people experience repeated failure at a task. Thus, they decide the task can't be done and stop trying.

  • Ppl tend to blame themselves when tech doesn't work.
  • Math curriculum is like this. Each lesson assumes full knowledge of prior lessons.

Positive Psychology

  • Do not blame people when they fail to use your products properly.
    • e.g. "Didn't you read the manual?"
  • Take people’s difficulties as signifiers of where the product can be improved.
  • Eliminate all error messages from electronic or computer systems. Instead, provide help and guidance.
  • Make it possible to correct problems directly from help and guidance messages. Allow people to continue with their task: Don’t impede progress—help make it smooth and continuous. Never make people start over.
  • Assume that what people have done is partially correct, so if it is inappropriate, provide the guidance that allows them to correct the problem and be on their way.
  • Think positively, for yourself and for the people you interact with.

How Technology Can Accommodate Human Behavior

"When we collaborate with machines, it is people who must do all the accommodation. Why shouldn’t the machine be more friendly?"

"Many machines are programmed to be very fussy about the form of input they require, where the fussiness is not a requirement of the machine but due to the lack of consideration for people in the design of the software."

"Designers should strive to minimize the chance of inappropriate actions in the first place by using affordances, signifiers, good mapping, and constraints to guide the actions ... When people understand what has happened, what state the system is in, and what the most appropriate set of actions is, they can perform their activities more effectively."

The Seven Stages of Action: Seven Fundamental Design Principles

Seven stages of action

  1. What do I want to accomplish?
  2. What are the alternative action sequences?
  3. What action can I do now?
  4. How do I do it?
  5. What happened?
  6. What does it mean?
  7. Is this okay? Have I accomplished my goal?
  • Feedforward: Info that helps answer question of execution (doing)
  • Feedback: Info that aids in understanding what happened.

Seven fundamental principles of design

  1. Discoverability: Easy to determine what actions are possible
  2. Feedback: Information on whether the action did anything, and what state the system is now in.
  3. Conceptual model: The designer's intended mental model, or user's set of ideas about how the system is organized and operates.
  4. Affordances: The proper affordances exist to make the desired actions possible.
  5. Signifiers: Effective use of signifiers ensures discoverability and that the feedback is well communicated and intelligible.
  6. Mappings: The relationship between controls and their actions follows the principles of good mapping, enhanced as much as possible through spatial layout and temporal contiguity.
  7. Constraints. Providing physical, logical, semantic, and cultural constraints guides actions and eases interpretation.

MacKenzie, I.S. (2013). Chapter 2: The Human Factor. Human-Computer Interaction: An Empirical Research Perspective.

https://gatech.instructure.com/courses/340124/files/folder/Required%20Readings

Time scale of human action

What: Descriptive model of a human, covering different types of human actions and the timeframes within which the actions occur.

Model's four bands:

  1. Biological band - quantitative, experimental
  2. Cognitive band -
  3. Rational band -
  4. Social band - qualitative, non-experimental
  • Each band is divided into three levels, 9 levels in total.

Human Factor

  • The dash in "human-computer" is where the interaction takes place between human and machine.

Sensors

Vision

  • Light has intensity (brightness) and frequency (perception of color)
    • Fixations: eyes stationary, taking in the environment
    • Saccades: quick movements between fixations, 30-120 ms.
  • Scanpath: Tracking eye movement. How users interpret an image / web page / etc. Has implications in advertising.

Hearing

Components:

  • Intensity: Sound pressure level. Painful at 120-140dB.
  • Frequency: Pitch. Humans perceive 20 Hz-20 kHz
  • Timbre: Harmonic richness of sound.
  • Envelope: Change in amplitude over time.

Touch

Sensors: skin, muscles, bones, joints, and organs

  • Gets info about temperature, shape, texture, or position of the object, or the amount of resistance
  • Relevant in augmenting UX through tactile feedback.

Smell and taste

  • Generally hard to incorporate into HCI. Brewster et al. (2006) studied smell as an aid in searching digital photo albums.

Responders

What: Motor control to affect the environment.

  • Movement of the limbs is tightly coupled to the somatosensory system
  • Proprioception: the coordination of limb movement and position through the perception of stimuli within muscles and tendons.

Et cetera

(This chapter keeps going and going...)

Hutchins, E. L., Hollan, J. D., & Norman, D. A. (1985). Direct manipulation interfaces. Human–Computer Interaction

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.122.4927&rep=rep1&type=pdf

  • Covers distance and engagement
  • Two forms of distance: Semantic (how hard is it to know what to do) and articulatory (how hard is it to execute it)

Week 4 - Design Alternatives

Faste, H., Rachmel, N., Essary, R., & Sheehan, E. (2013, April). Brainstorm, Chainstorm, Cheatstorm, Tweetstorm: new ideation strategies for distributed HCI design

Faste, H., Rachmel, N., Essary, R., & Sheehan, E. (2013, April). Brainstorm, Chainstorm, Cheatstorm, Tweetstorm: new ideation strategies for distributed HCI design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1343-1352). ACM.

http://henrybacondesign.com/wp-content/uploads/2017/02/Brainstorm_Chainstorm_Cheatstorm_Tweetst.pdf

Takeaways

  • "Chainstorming" - Variation of Brainstorm that emphasizes the connection between ideas. Participants build upon each other's suggestions, forming a chain of ideas. This approach helps in exploring interconnected concepts and encourages the development of more complex and innovative solutions.
  • "Cheatstorming" - Do bunch of brainstorm sessions, save ideas. Now next brainstorm session, people vote on existing brainstormed ideas to fulfill the prompt
  • "Tweetstorming" - Social platform for ideation. Chainstorming via twitter. Custom website that allows users to see other users’ questions, reply to them selectively, browse other users’ replies to prompts, and vote on their favorite ideas to select them
  • Can be used individually or combined.
  • Significance: Ideation by redistributing existing ideas in different contexts to "unconventionalize" them.

Yang, M. C. (2009). Observations on concept generation and sketching in engineering design.

Yang, M. C. (2009). Observations on concept generation and sketching in engineering design. Research in Engineering Design, 20(1), 1-11.

https://pdfs.semanticscholar.org/dc8f/c7d181f4994dc7044ecb3e9e9454b765886f.pdf

Takeaways

  • Hypothesis 1: The quantity of concepts generated at the beginning of a design project correlates with design outcome.
  • Hypothesis 2: The quantity of sketches generated during a project correlates with its design outcome.
  • Outcome: Quantified by final grade and contest results.
  • Conclusion 1: Concept generation measured in the form of morphology charts showed a statistically significant correlation with both project and final term grade in the introductory course.
  • Conclusion 2: Morphology charts in the advanced courses did not show a statistically significant correlation.
  • Conclusion 3: A designer can sketch very little overall and still achieve a better design grade, as long as the bulk of dimensioned drawings (and perhaps prototyping) are created early in the design process

Rogers, Y., Sharp, H., & Preece, J. (2011). Chapter 6: The Process of Interaction Design

Rogers, Y., Sharp, H., & Preece, J. (2011). Chapter 6: The Process of Interaction Design. In Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons.

http://www.wiley.com/legacy/wileychi/interactiondesign/pdf/ID_ch6.pdf

Takeaways:

  • The interaction design process consists of four basic activities: identifying needs and establishing requirements, developing alternative designs that meet those requirements, building interactive versions of the designs so that they can be communicated and assessed, and evaluating them.
  • Key characteristics of the interaction design process are explicit incorporation of user involvement, iteration, and specific usability criteria.
  • Before you can begin to establish requirements, you must understand who the users are and what their goals are in using the device.
  • Looking at others’ designs provides useful inspiration and encourages designers to consider alternative design solutions, which is key to effective design.
  • Usability criteria, technical feasibility, and users’ feedback on prototypes can all be used to choose among alternatives.
  • Prototyping is a useful technique for facilitating user feedback on designs at all stages.
  • Lifecycle models show how development activities relate to one another.
  • The interaction design process is complementary to lifecycle models from other fields.

Software engineering lifecycle models:

  1. Waterfall lifecycle - linear model
  2. Spiral lifecycle - iterative framework, uses risk analysis and prototyping.
  3. Rapid Applications Development (RAD)
    • 6 month time box for system delivery
    • JAD (Joint Application Development) workshops, users/devs collaborate to flesh out requirements of the system.
  4. Star Lifecycle Model
    • No ordering of activities, all interconnected.
    • Must always evaluate at end of each activity.
  5. Usability Engineering Lifecycle
    • three tasks: requirements analysis, design/testing/development, and installation
    • uses a "style guide" as mechanism for capturing/disseminating usability goals of the project.

UAE Diagram


Week 5 - Mental Models and Representations

MacKenzie, I.S. (2013). Section 3.4: Mental Models & Metaphor

MacKenzie, I.S. (2013). Section 3.4: Mental Models & Metaphor. Human-Computer Interaction: An Empirical Research Perspective. (pp. 88-92). Waltham, MA: Elsevier. https://gatech.instructure.com/courses/340124/files/folder/Required%20Readings

Takeaways:

  • Implementation models - Bad. "impose on the user a set of interactions that follow the inner workings of an application", doesn't follow user's conceptual model
    • Example: "software-based fax product where the user is paced through a series of agonizing details and dialogs"

Good mental models:

  1. Icons - not all are immediately understood. Can add balloons as signifiers.
  2. Compass / clock face as metaphor for direction
    • Example: Using the clock-face metaphor to help blind users navigate.

MacKenzie, I.S. (2013). Section 3.8: Interaction errors

MacKenzie, I.S. (2013). Section 3.8: Interaction errors. Human-Computer Interaction: An Empirical Research Perspective. (pp. 111-116). Waltham, MA: Elsevier. https://gatech.instructure.com/courses/340124/files/folder/Required%20Readings

4 Examples of Interaction Errors

  1. Losing info while working on their computers
  2. Password prompt - Is my caps lock on or off? Typed characters are hidden, so the user can't verify.
  3. Selecting text and dragging - dragging past the edge of the screen triggers velocity-controlled scrolling, which is hard to control.
  4. Inconsistencies in focus advancement

Takeaways:

  • "The absence of expectations keeps the user on guard"
  • "Where the consequences of errors are small, such as an extra button click or a gaze shift, errors tend to linger"
  • "User experiences exist as collections of microstrategies. Whether booking a vacation online or just hanging out with friends on a social networking site, big actions are collections of little actions. To the extent possible, user actions form the experience, our experience. It is unfortunate that they often exist simply to serve the needs of the computer or application"
  • "Another reason little errors tend to linger is that they are often deemed user errors, not design, programming, or system errors"

Norman, D. (2013). Chapter 5: Human Error? No, Bad Design

Norman, D. (2013). Chapter 5: Human Error? No, Bad Design. In The Design of Everyday Things: Revised and Expanded Edition. (pp. 162-216). Arizona: Basic Books. https://gatech.instructure.com/courses/340124/files/folder/Required%20Readings

Takeaways:

  • Most common cause of errors: "nature of the tasks and procedures that require people to behave in unnatural ways"
    • Others: Time stress, deliberate risk taking
  • Blaming the individual doesn't solve systemic errors.
  • Root cause analysis: investigate accident until the single, underlying cause is found.
  • Five Whys: originally developed by Sakichi Toyoda and used by the Toyota Motor Company
    • Ask "why?" repeatedly, nominally five times, until the true underlying causes are uncovered.
    • Doesn't guarantee success in finding root cause(s)
  • "Why do people err? Because the designs focus upon the requirements of the system and the machines, and not upon the requirements of people."
  • Deliberate risk taking - Types
    1. Frequent noncompliance
    2. Inappropriate rules or procedures that invite violation, and a culture that rewards violation.
  • Slips and mistakes (covered in lecture notes)
  • Types of action slips:
    1. Capture slips: instead of the desired activity, a more frequently or recently performed one gets done
    2. Description-similarity slips: Error by acting upon an item similar to the target.
    3. Mode-error slips: Error when a device has different states in which the same controls have different meanings.
  • Social pressures lead to mistakes. How to solve? Need to reward safety and put it above economic pressures.
  • Checklists - collaboratively followed checklists are effective.
    • Bad to impose a sequential structure to task execution (unless the task itself requires it).
  • Toyota's Jidoka philosophy - "automation with a human touch"
    • If a worker notices something wrong, the worker is supposed to report it, sometimes even stopping the entire assembly line if a faulty part is about to proceed to the next station.
    • Punishment is applied to the group, which motivates reporting.
  • "Poka-yoke", Shigeo Shingo - constraints to avoid error.
    • e.g. add simple fixtures, jigs, or devices to constrain the operations so that they are correct
  • Designing for error:
    • Understand the causes of error and design to minimize those causes.
    • Do sensibility checks. Does the action pass the “common sense” test?
    • Make it possible to reverse actions—to “undo” them—or make it harder to do what cannot be reversed.
    • Make it easier for people to discover the errors that do occur, and make them easier to correct.
    • Don’t treat the action as an error; rather, try to help the person complete the action properly. Think of the action as an approximation to what is desired.
  • Design lessons:
    1. Add constraints to block errors.
    2. Allow "undo"
    3. Liberal use of confirmation and error messages
    4. Implement "sensibility checks" - capture outlier actions, they're probably a mistake/slip.
  • Swiss cheese model of errors - many small errors lead to a catastrophe. Prevent by:
    • Adding more slices of cheese
    • Reduce the number of holes, or make them smaller
    • Alert the human operators when holes have lined up.
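The Swiss cheese model can be read probabilistically: a catastrophe requires the holes in every defensive layer to line up, so (assuming independent layers, an assumption of this sketch) the failure probability is the product of the per-layer hole probabilities, and each added slice multiplies it down. The layer probabilities below are invented for illustration:

```python
from math import prod

def failure_probability(hole_probs):
    """Probability that an error passes through every defensive layer,
    assuming the layers' holes line up independently."""
    return prod(hole_probs)

layers = [0.1, 0.2, 0.05]  # per-layer chance an error slips through
print(round(failure_probability(layers), 6))        # → 0.001

# Adding one more slice of cheese (another defense) shrinks it further:
print(round(failure_probability(layers + [0.1]), 6))  # → 0.0001
```

Shrinking any single hole (lowering one probability) has the same multiplicative effect as adding a layer, which is why both appear as prevention strategies above.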

Design Principles for Dealing with Error

  1. Put the knowledge required to operate the technology in the world.
    • Don't keep it in your head. Share it and have collective responsibility.
  2. Use the power of natural and artificial constraints: physical, logical, semantic, and cultural.
  3. Bridge the two gulfs, the Gulf of Execution and the Gulf of Evaluation.
    • On the execution side, provide feedforward information: make the options readily available.
    • On the evaluation side, provide feedback: make the results of each action apparent.

Mander, R., Salomon, G., & Wong, Y. Y. (1992, June). A “pile” metaphor for supporting casual organization of information

Mander, R., Salomon, G., & Wong, Y. Y. (1992, June). A “pile” metaphor for supporting casual organization of information. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 627-634). ACM. http://www.cs.columbia.edu/~feiner/courses/csw4170/resources/p627-mander.pdf

Takeaway:

  • The authors present a prototype system called PILE, which allows users to create virtual piles of digital documents or files. These piles function as informal groupings that can be easily created, modified, and navigated. The system incorporates visual representations of the piles, providing a graphical interface that mimics physical piles of papers.
  • Overall, the paper presents the concept of using a pile metaphor as an alternative approach to organizing digital information, emphasizing its flexibility and ease of use for casual organization purposes. The PILE system demonstrates the viability of this approach and highlights the potential benefits it offers for users in managing their digital information.

Results:

  • Although each user had a clear preference for one of our methods of pile creation (“pile-centered” or “document-centered”), neither method was judged to be clearly superior.
  • In the “document-centered” model, users liked the ability to grab an individual document within a pile.
    • A problem with this model was that users were not sure how to move a pile as a unit, since selecting any part of the pile led to moving an individual item rather than the pile as a whole.
  • In the “pile-centered” model, users liked the way the system automatically aligned the items in the pile, the ability to move a pile as a unit, and the highlighting that indicated a pile was ready to accept an item.
    • A problem with this model was the difficulty of selecting an individual item within the pile.

END OF MATERIAL FOR TEST 1


Week 6 - Prototyping

Houde, S., & Hill, C. (1997). What do prototypes prototype?

Houde, S., & Hill, C. (1997). What do prototypes prototype? In M. Helander, T.K. Landauer, & P. Prabhu (Eds.), Handbook of Human-Computer Interaction, 2. (pp. 367-381). Elsevier Science. http://www.itu.dk/people/malmborg/Interaktionsdesign/Kompendie/Houde-Hill-1997.pdf

"The goal of this chapter is to establish a model that describes any prototype in terms of the artifact being designed, rather than the prototype's incidental attributes."

"By focusing on the purpose of the prototype--that is, on what it prototypes--we can make better decisions about the kinds of prototypes to build."

Goals of prototyping new functionalities:

  1. If the function is well understood, but the goal is to present it in a new way - Focus on prototyping how the new artifact looks and feels.
  2. If the artifact's function is based on a new technique - Focus on prototyping how to implement the design.

Problems of Prototypes

  • Organization may have narrow view of what a prototype should be.
  • Focusing too much on the attributes of the prototype itself (ie. which tool was used to create it, fidelity).
    • Resolution: Amount of detail
    • Fidelity: Closeness to the eventual design.

Prototype of a Model

What prototypes prototype:

  1. Role - The way in which the feature is useful to users
  2. Implementation - Technique of using the feature, "nuts and bolts"
  3. Look and feel - Sensory experience of using the feature

Integration prototypes - represent the "complete user experience" of an artifact.

Takeaways

  1. Define "prototype" broadly - simple can be effective
  2. Build multiple prototypes - make a lot of simple prototypes, be prepared to toss them.
  3. Know your audience - Choose how polished you want your prototype to be depending on your audience.
  4. Know your prototype - Be clear about what design questions are being explored with a given prototype.

Beaudouin-Lafon, M., & Mackay, W. (2003). Prototyping tools and techniques. Human Computer Interaction-Development Process. (pp. 101-142).

Beaudouin-Lafon, M., & Mackay, W. (2003). Prototyping tools and techniques. Human Computer Interaction-Development Process. (pp. 101-142). https://www.lri.fr/~mackay/pdffiles/Prototype.chapter.pdf

Takeaways

  • More focused on interactive system prototypes, as opposed to other fields (architectural, etc).
  • What prototypes do:
    1. Support creativity by generating ideas
    2. Encourage communication
    3. Encourage early evaluation.
  • How prototypes are analyzed
    1. Representation - forms of the prototype
    2. Precision - level of detail (resolution)
    3. Interactivity - extent to which the user can actually interact
    4. Evolution - expected life-cycle of the prototype.
  • Offline prototyping is valuable even in software
    • fast to iterate
    • allows more creativity, fewer constraints
    • can be created by non-programmers (important)
  • Design space: constrains design possibilities along some dimensions, while leaving others open for creative exploration.
    • Expand: brainstorming, video brainstorming
    • Contract: Selecting alternatives
  • Prototype strategies:
    1. Horizontal - breadth
    2. Vertical - depth
    3. Task-oriented - Fulfill a task start to finish
    4. Scenario - Less focus on individual, independent tasks; instead follows a realistic scenario of how the system would be used in a real-world setting.
  • Goes into details of UI design, particularly web design topics

Fender, A. R. & Holz, C. (2022). Causality-preserving Asynchronous Reality.

Fender, A. R. & Holz, C. (2022). Causality-preserving Asynchronous Reality. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. https://programs.sigchi.org/chi/2022/index/content/68789

Takeaways:

  • Augmented reality - the problem is bridging between the virtual world and reality.
  • Solution: Create a system where someone in the virtual world can play back events (ie. a "causality graph of co-dependent events"). Gives context to what was going on while they were immersed.
  • How it works:
    1. Focus mode: User blocks outside world.
    2. Someone else might interrupt, or leave something in their room.
    3. When the user approaches an object that was left behind, they see a playback of the event.
  • Motivation: Make synchronous daily activities retrievable asynchronously.
    • Asynchronous communication is still completely explicit, ie. the sender must consciously decide to send an email, voice recording, etc.
    • But it has the advantage that the receiver may choose when to process the message. This paper tries to bring that property to real-life events.
  • Contributions:
    1. Asynchronous Reality as a concept.
    2. System called AsyncReality as an instance of said concept.
  • Implementation challenge: "Not the playback itself, but how and when to trigger event playback and in which order to play back the events"

Kim, J., Choi, Y., Xia, M., & Kim, J. (2022). Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities.

Kim, J., Choi, Y., Xia, M., & Kim, J. (2022). Mobile-Friendly Content Design for MOOCs: Challenges, Requirements, and Design Opportunities. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. https://programs.sigchi.org/chi/2022/index/content/68914

Abstract:

Most video-based learning content is designed for desktops without considering mobile environments.

We (1) investigate the gap between mobile learners’ challenges and video engineers’ considerations using mixed methods and (2) provide design guidelines for creating mobile-friendly MOOC videos.

To uncover learners’ challenges, we conducted a survey (n=134) and interviews (n=21), and evaluated the mobile adequacy of current MOOCs by analyzing 41,722 video frames from 101 video lectures.

Interview results revealed low readability and situationally-induced impairments as major challenges. The content analysis showed a low guideline compliance rate for key design factors.

We then interviewed 11 video production engineers to investigate design factors they mainly consider. The engineers mainly focus on the size and amount of content while lacking consideration for color, complex images, and situationally-induced impairments.

Finally, we present and validate guidelines for designing mobile-friendly MOOCs, such as providing adaptive and customizable visual design and context-aware accessibility support.

Week 7 - Context and Distributed Cognition

Nardi (1992). Studying Context: A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition

Nardi, B. (1992). Studying context: A comparison of activity theory, situated action models and distributed cognition. In B. Nardi (Ed.) Context and Consciousness: Activity Theory and Human-Computer Interaction. (pp. 35-52). MIT Press.

Situated Action Models

Definition:

"Situated action models emphasize the emergent, contingent nature of human activity, the way activity grows directly out of the particularities of a given situation"

Compared to other domains, focused on situated action:

"The focus of study is situated activity or practice, as opposed to the study of the formal or cognitive properties of artifacts, or structured social relations, or enduring cultural knowledge and values. "

"A central tenet of the situated action approach is that the structuring of activity is not something that precedes it but can only grow directly out of the immediacy of the situation "

Unit of analysis:

  • Setting: "A relation between acting persons and the arenas in relation with which they act"
  • Arena: "Stable institutional framework"
  • "The unit of analysis is thus not the individual, not the environment, but a relation between the two."

Activity Theory

"The activity itself is the context."

  • What takes place in an activity system, composed of objects, actions, and operations, is the context

Unit of analysis: Activity

  • Activity composed of subject, object, actions, and operations (Leont'ev, 1974)
  • Subject: Person or a group engaged in an activity
  • Object (ie. "objective"): Held by the subject and motivates activity, giving it a specific direction
    • Objects can be transformed in the course of an activity; they are not immutable structures
  • Action: Goal-directed processes that must be undertaken to fulfill the object. Different actions may be undertaken to meet the same goal
    • Actions are similar to what are often referred to in the HCI literature as tasks
  • Artifacts: instruments, signs, language, and machines, that mediate activity and are created by people to control their own behavior
    • Mediation by artifacts: "Artifacts carry with them a particular culture and history (Kuutti 1991) and are persistent structures that stretch across activities through time and space."

Distributed Cognition

Unit of analysis: A cognitive system composed of individuals and the artifacts they use

  • Systems have goals (e.g. cockpit is for completing a flight)
  • Coordination of artifacts to function as a whole

Differences Between Activity Theory, Situated Action Models and Distributed Cognition

  1. The Structuring of Activity
    • Treatment of motive and goals
      • "In activity theory, activity is shaped first and foremost by an object held by the subject"
      • "A distributed cognition analysis begins with the positing of a system goal, which is similar to the activity theory notion of object, except that a system goal is an abstract systemic concept that does not involve individual consciousness."
    • "Attention to the shaping force of goals in activity theory and distributed cognition, be they conscious human motives or systemic goals, contrasts with the contingent, responsive, improvisatory emphasis of situated action."
    • "In activity theory and distributed cognition, by contrast, an object-goal is the beginning point of analysis. An object precedes and motivates activity."
    • "In activity theory and distributed cognition, an object is (partially) determinative of activity; in situated action, every activity is by definition uniquely constituted by the confluence of the particular factors that come together to form one 'situation.'"
    • Uniqueness of Situated Action - highly improvisational
      • "The ability to bring operations to a conscious level, even if only partially, is an aspect of the dynamism of the levels of activity as posited by activity theory."
      • "In situated action, goals and plans cannot even be realized until after the activity has taken place, at which time they become constructed rationalizations for activity that is wholly created in the crucible of a particular situation."
  2. Persistent Structures - What role do artifacts, institutions, and cultural values play in shaping activity
    • "For both activity theory and distributed cognition, persistent structures are a central focus."
    • "Situated action models less readily accommodate durable structures that persist over time and across different activities."
      • "Situated action accounts may then exhibit a tension between an emphasis on that which is emergent, contingent, improvisatory and that which is routine and predictable."
    • "Of the three frameworks, distributed cognition has taken most seriously the study of persistent structures, especially artifacts."
  3. People and Things: Symmetrical or Asymmetrical?
    • "Kaptelinin (chapter 5, this volume) points out that activity theory differs fundamentally from cognitive science in rejecting the idea that computers and people are equivalent."
    • "Activity theory, with its emphasis on the importance of motive and consciousness—which belong only to humans—sees artifacts and people as different."
    • "Distributed cognition, by contrast, views people and things as conceptually equivalent; people and artifacts are “agents” in a system."
    • "The activity theory position would seem to hold greater potential for leading to a more responsible technology design in which people are viewed as active beings in control of their tools for creative purposes rather than as automatons whose operations are to be automated away, or nodes whose rights to privacy and dignity are not guaranteed."
    • "But as I have noted, situated action models, perhaps inadvertently, may present people as reactive ciphers rather than fully cognizant human actors with self-generated agendas."

Deciding Among the Three Approaches

  • "The situated action perspective has provided a much-needed corrective to the rationalistic accounts of human behavior from traditional cognitive science."
  • "Distributed cognition has shown how detailed analyses that combine the formal and cognitive properties of artifacts with observations on how artifacts are used can lead to understandings useful for design."
  • "Activity theory and distributed cognition are very close in spirit, as we have seen, and it is my belief that the two approaches will mutually inform, and even merge, over time"
  • "Situated action models make it difficult to go beyond the particularities of the immediate situation for purposes of generalization and comparison."
    • "There has been a continuing aversion to incorporating the subjective in situated action models, which have held fast in downplaying consciousness, intentionality, plans, motives, and prior knowledge as critical components of human thought and behavior"

Methodological Implications of Activity Theory

  1. A research time frame long enough to understand users' objects
  2. Attention to broad patterns of activity rather than narrow episodic fragments that fail to reveal the overall direction and import of an activity
  3. The use of a varied set of data collection techniques including interviews, observations, video, and historical materials, without undue reliance on any one method (such as video).
  4. A commitment to understanding things from users' points of view - the practical need for getting the 'natives' point of view.

Hutchins (1995). How a Cockpit Remembers Its Speeds

Note: Most of this paper is covered in lecture notes.

Applying the Cognitive Frame to the Cockpit System

"I will attempt to show that the classical cognitive science approach can be applied with little modification to a unit of analysis that is larger than a person"

Three Descriptions of Memory for Speeds

  1. Prepare landing data - extract from long term memory (book, speed cards).
  2. Descent - slow down to appropriate speed for each stage of descent.
  3. Final Approach - challenge/response pair, “Flight instruments and bugs/ Set and cross-checked.”

A Cognitive Description of Memory for Speeds - Representations and Processes Outside the Pilots

  • speed card - long term memory
  • values are represented as spoken words when the PNF tells the PF what the speeds are,
  • speeds are represented on the airspeed indicator in the positions of the speed bugs.
  • "Salmon bug" - provides the speed reference for both pilots, as both have speed-related tasks to perform.

"Setting the speed bugs is a matter of producing a representation in the cockpit environment that will serve as a resource that organizes performances that are to come later."

A Cognitive Description of Memory for Speeds - Representations and Processes Inside the Pilots

  • Computing the Speeds and Setting the Bugs - Cards provide this info
  • Using the Configuration Change Bugs
  • Using the Salmon Bug
  • The Pilot’s Memory for Speeds

Lustig, Konrad, Brubaker (2022). Designing for the Bittersweet: Improving Sensitive Experiences with Recommender Systems

Lustig, C., Konrad, A., & Brubaker, J. R. (2022). Designing for the Bittersweet: Improving Sensitive Experiences with Recommender Systems. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems.

Takeaways:

  • Facebook memories
  • 20 interview participants
  • Generally, most participants reacted positively to "bittersweet" content.
  • "Bittersweet" content, and when the encounter is (un)wanted:
    1. Expectedness - They often went to their News Feeds to see their friends’ and groups’ posts, not with the intent to reflect or reminisce. Negative surprise.
    2. Context of viewing - time of day, location, and mood/headspace.
    3. “Affective sense-making” - Determining what emotion the recommender system was trying to convey
      • "They felt negatively when the information was presented to them in a way that suggested a recommender system had misinterpreted bittersweet content as “happy” content."
      • "Participants did not feel that much could be done about seeing unwanted Memories because they felt the algorithms were just not “smart” enough to detect contextual factors that were highly individualized to them."
  • Recommendation:
    1. Draw inspiration from no/low-technology artifacts
    2. Use empirical research to identify contextual features that have negative impacts on users
    3. Conduct user studies on affective sense-making.
  • "technology-mediated reflection (TMR) systems" - supported by recommender systems that curate past content and are designed to give users an opportunity to reflect on the past
    • "Research has shown that TMRs are generally successful in having a positive impact on users’ well-being"
    • "studies have shown that TMRs provide well-being benefits for both positive and negative memories"
  • "5 main contributions to social computing research on the sociotechnical impacts of recommender systems that curate content related to people’s personal social lives":
    1. we uncover the complex emotions and experiences that people have with these systems,
    2. we introduce and explore the concept of “bittersweet” content, a sensitizing concept from our interviews,
    3. we introduce and explore the concept of “affective sense-making”,
    4. we identify open challenges for creating more compassionate and sensitive recommender systems that curate personal content, and
    5. we develop provocations and suggested practices for researchers and designers to address these challenges.
  • "These systems are particularly challenged by the difficulty of identifying sensitive content that is not “good” or “bad” — in other words, the bittersweet content which exists in a liminal space that resists algorithmic categorization or quantification."
  • Problem: "it is difficult for many recommender systems to meaningfully incorporate user feedback about content that evokes complex emotions"
    • "Both implicit and explicit feedback suffer from performance challenges due to noisy data."
    • "Explicit feedback can be more meaningful than implicit feedback because it uses scales to indicate whether a user liked or disliked content (e.g., a user gives a song thumbs up or thumbs down); whereas, implicit feedback typically only gathers positive feedback"
    • Worst-case scenario: unexpected encounters with sensitive content (e.g. an image of a deceased loved one or an ex)
  • Effects of reflection & nostalgia
    • "Redemption sequence" - positive, from "negative life scene to a positive or a triumphant one"
    • "Contamination sequence" - negative, from "affectively positive life scene to a negative one"
  • Actions taken depending on whether an encounter with bittersweet content was (un)wanted
    1. disengaging or venting
    2. discussing it with a select audience
  • Two challenges for the recommendation of bittersweet content
    1. Detection of Bittersweet Content
    2. Understanding Feedback on Bittersweet Content

  • Recommendation: Focus on increasing expectedness and user agency
    1. Finding: People want to be able to expect when they will see bittersweet content
      • "Consider scrapbooks: While filled with memories from the past, people make intentional choices to pull the scrapbook off the shelf"
    2. Design practice: examining the benefits and harms of expectedness and discovery
      • "In order to evaluate the trade-offs between expectedness and discovery, designers will have to consider the benefits and harms of algorithmically curated systems (which enable discovery) relative to their non-digital counterparts (which behave in more predictable ways)"
    3. Design implications: enabling expectations
      • Recommender systems could put sensitive content into its own special place for people to view when they are ready.
      • Provide users with opportunities to initiate expected encounters with sensitive curated content.
      • Provide more channels (and more frictionless sharing) to allow spaces for private groups to come together over the content.
    4. Provocation: How can designs draw inspiration from familiar low-tech and non-tech artifacts where sensitive content is common?
  • Designing for contextual factors: examining the relationship between context and sensitive content
    1. Finding: the context in which someone views sensitive content shapes whether it is upsetting
      • "Specifically, people found it to be a particularly difficult experience when they viewed Facebook in contexts that they felt were inappropriate for viewing emotionally intense content (e.g., while in bed in the morning, during a lull at work, or while parked in the car when waiting to pick someone up)."
    2. Design practice: Moving from simple signals to holistic and contextual feature sets.
    3. Design implications: Detecting contextual factors.
      • Make it clear to users that there are options to customize recommender systems to exclude certain content.
      • Change the time of day or location that sensitive content is shown.
    4. Provocation: How can we ameliorate negative aspects of encountering bittersweet content when bittersweet content cannot always be readily identified or negative outcomes predicted?
  • Designing for how humans understand problems: affective sense-making and computational tractability
    • (More stuff here...)

Gordon et al (2022). Jury Learning: Integrating Dissenting Voices into Machine Learning Models

Gordon, M., Park, J. S., Hancock, J., Bernstein, M. S., Lam, M. S., Patel, K., & Hashimoto, T. (2022). Jury Learning: Integrating Dissenting Voices into Machine Learning Models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. https://programs.sigchi.org/chi/2022/index/content/68851

Figure 1: An overview of jury learning.

Problem: "For ML tasks ranging from online comment toxicity to misinformation detection to medical diagnosis, different groups in society may have irreconcilable disagreements about ground truth labels."

  • Majority rule overrides minority group's labels.
  • "practitioners are typically unaware of whether stakeholders for a particular deployment or inference will disagree with a classifier’s decisions, because they haven’t modeled every annotator’s or group’s opinions"
  • "There remains a gap in providing algorithmic and interactive mechanisms that resolve the who, why, and decision rules of machine learning under societal disagreement."

Solution: jury learning - Supervised ML approach that resolves these disagreements explicitly through the metaphor of a jury: defining which people or groups, in what proportion, determine the classifier's prediction.

  • "To enable jury learning, we contribute a deep learning architecture that models every annotator in a dataset, samples from annotators’ models to populate the jury, then runs inference to classify."
  • "Our architecture enables juries that dynamically adapt their composition, explore counterfactuals, and visualize dissent."
  • "Jury learning is a supervised learning approach that asks practitioners to specify whose voices their classifiers reflect, and in what proportion."

"Jury"

  • "For our purposes, we refer to a jury as a bounded set of individuals whose opinions aggregate into a decision."
  • random sample from population of labelers based on "jury composition that the ML practitioner has articulated (e.g. 6 conservative and 6 liberal jurors)".
  • "Jury learning then algorithmically predicts how each of these twelve selected jurors would label the input, and then aggregates those responses into a decision"
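
The assembly-and-aggregation step above can be sketched in Python. Everything here is a toy: the annotator pool, the `politics` attribute, and the length-based stand-in predictors are hypothetical placeholders for the paper's learned per-annotator models.

```python
import random
from collections import Counter

# Hypothetical annotator pool. In the paper each annotator has a learned
# model; here a trivial length-based rule stands in for that predictor.
def make_predictor(i):
    return lambda text: "toxic" if (len(text) + i) % 3 == 0 else "not toxic"

ANNOTATORS = [
    {"id": i,
     "politics": "conservative" if i % 2 == 0 else "liberal",
     "predict": make_predictor(i)}
    for i in range(100)
]

def assemble_jury(composition, seed=None):
    """Sample annotators to match a practitioner-specified composition,
    e.g. {"conservative": 6, "liberal": 6}."""
    rng = random.Random(seed)
    jury = []
    for group, count in composition.items():
        pool = [a for a in ANNOTATORS if a["politics"] == group]
        jury.extend(rng.sample(pool, count))
    return jury

def jury_classify(jury, text):
    """Predict each juror's label, then aggregate by majority vote."""
    votes = Counter(juror["predict"](text) for juror in jury)
    return votes.most_common(1)[0][0], votes

jury = assemble_jury({"conservative": 6, "liberal": 6}, seed=42)
label, votes = jury_classify(jury, "an example comment")
```

Re-sampling several juries and aggregating their decisions would reduce the variance from any single random draw.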

Why majority-voice outcomes occur in ML:

  1. Majority vote aggregation of the raw crowdsourced annotations overrides minority viewpoints in generating ground truth
  2. Even if training data points are disaggregated, the training algorithm minimizes its loss function by predicting accurately for the opinions held by the largest group of people in the dataset.

Difficulty of annotator agreement:

  1. for tasks such as those common in social computing contexts, much of the disagreement is likely irreducible (e.g. “What does, and doesn’t, cross the line into harassment?”)

Use case:

  1. "Saanvi has created an online news-sharing social network, and wants to create a classifier to detect any instances of personal attacks on the platform. She finds a popular, publicly available large-scale dataset, trains a model using the traditional supervised learning pipeline, and deploys it to her community. The classifier takes as input the text of a comment, and returns a “toxic” or “not toxic” label."
  2. "Saanvi suspects that perhaps her classifier isn’t making decisions in ways that reflect the voices in her community."
  3. "First, Saanvi explores different jury configurations to confirm any group-based differences that she expects to see, inputting comments and exploring how the jurors in each group respond."
  4. "Saanvi and the other stakeholders observe that their intuitions of the proper jury composition change based on which groups might be targeted in that post: that when a news article is about women’s issues, they want more women on the jury; when a news article concerns LGBTQ+ rights, they want more jurors identifying as LGBTQ+; when an article is about a Black woman, they want more Black women on the jury."

Technical approach

  1. "Requires an approach that can make such predictions from a dataset of similar size and structure to those already in use when training supervised standard classifiers"
  2. The only additional assumption we make is that any characteristic used to select jurors (e.g., gender identity) must exist for each juror.
  3. We propose a model architecture that jointly trains a content model for classification tasks (such as from BERT) alongside a deep recommender system.
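
A minimal sketch of the core modeling idea: combine a content representation with a per-annotator embedding, recommender-style, to predict how a specific annotator would label an input. The encoder, embedding table, and dimensions below are invented stand-ins with untrained random weights, not the paper's BERT-based architecture.

```python
import random

DIM = 8  # embedding size, arbitrary for this sketch
_rng = random.Random(0)

# Hypothetical learned annotator embeddings, keyed by annotator ID.
annotator_embedding = {
    aid: [_rng.gauss(0, 1) for _ in range(DIM)] for aid in range(20)
}

def content_model(text):
    """Stand-in for a trained text encoder (the paper uses BERT):
    deterministically maps text to a DIM-dimensional vector."""
    r = random.Random(sum(text.encode()))
    return [r.gauss(0, 1) for _ in range(DIM)]

def predict_annotator_label(text, annotator_id):
    """Recommender-style interaction: score = content vector dotted with
    the annotator embedding, thresholded into a binary label."""
    c = content_model(text)
    e = annotator_embedding[annotator_id]
    score = sum(ci * ei for ci, ei in zip(c, e))
    return "toxic" if score > 0 else "not toxic"
```

The real system trains both components jointly so the annotator embeddings capture systematic disagreement patterns; this sketch only shows the inference-time interaction.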

(This paper keeps going...)

Seberger et al (2022). Still Creepy After All These Years: The Normalization of Affective Discomfort in App Use

Seberger, J. S, Swiatek, E., Shklovski, I. & Patil, S. (2022). Still Creepy After All These Years: The Normalization of Affective Discomfort in App Use. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. https://programs.sigchi.org/chi/2022/index/content/68804

Problem: "It is not well understood why people continue to use privacy-invasive apps they consider creepy."

  • "The privacy paradox refers to a phenomenon wherein people say they value privacy but act in ways that seem to show little concern for it."
  • "This “attitude-behavior gap” continues to complicate research- and design-based attempts to help people manage privacy in the digital world."

Question: We conducted a scenario-based study (n = 751) to investigate how the intention to use an app is influenced by affective perceptions and privacy concerns.

Results:

  1. We show that creepiness is one facet of affective discomfort, which is becoming normalized in app use.
  2. This influence is mitigated by other factors, including data literacy, views regarding app data practices, and ambiguity of the privacy threat.

Conclusion:

  1. Reject hypothesis 1: Perceived presence of privacy control is associated with increased intention to continue using a privacy-invasive app
  2. Accept H2: Perceived data literacy is positively associated with the intention to continue using a privacy-invasive app.
  3. Partially accept H3: Affective discomfort is negatively associated with the intention to continue using a privacy-invasive app
  4. Partially accept H4: Ambivalence toward the data practices of a privacy-invasive app is positively associated with the intention to continue using the app

    "Creepy data practices normalize the expectation that affective discomfort is part and parcel of using apps; tolerating them further normalizes such experiences."


Week 8 - Experiments and Evaluation

MacKenzie, I.S. (2013). Chapter 5: Designing HCI Experiments. Human-Computer Interaction: An Empirical Research Perspective. (pp. 157-188). Waltham, MA: Elsevier.

Method:

The discussions that follow pertain only to experimental research and in particular to factorial experiments, where participants are exposed to levels of factors (test conditions) while their behavior (human performance) is observed and measured.

  • Based on experimental psychology

Ethics Approval

“researchers must respect the safety, welfare, and dignity of human participants in their research and treat them equally and fairly.”

Experiment Design

  • Think about what the experimental variables are - this creates narrow and testable questions
  • Independent and dependent variables - think through them

Independent variables:

  • "An independent variable is a circumstance or characteristic that is manipulated or systematically controlled to elicit a change in a human response while the user is interacting with a computer."
  • independent var also called factor
  • Experiments designed with independent variables are often called factorial experiments.
  • independent variable can be many things: computer system attributes, human characteristics, environmental circumstance, etc.
  • Independent variables tips:
    • when formulating an independent variable, express it both in terms of the circumstance or characteristic itself as well as the levels of the circumstance or characteristic chosen for testing (e.g. "interaction stance" and "sitting/standing/walking")
    • Once the name of the independent variable and the names of the levels are decided, stick with these terms consistently throughout a paper
  • Multiple independent vars: "A design with a single independent variable includes a main effect, but no interaction effects."

Dependent variables

  • Definition: Measured human behavior (e.g. speed / accuracy)
  • Measurements - "So when it comes to dependent variables, it is acceptable to ‘roll your own.’"

Other variables: control, random, and confounding

  • Control - variables kept the same
  • Random - variables allowed to vary randomly. For generalizability.
  • Confounding - variables that may also affect the dependent variable. Prior experience is a big one.

Task and procedure

  • "There are two objectives in designing a good task: represent and discriminate."
  • A good task is representative of the activities people do with the interface (external validity)
    • Cons: "The more representative the task, the more the task is likely to include behaviors not directly related to the interface or interaction method under test."
  • A good task is also one that can discriminate the test conditions. (Attuned to measure points where differentiation will actually occur).
  • "The best task is one that is natural yet focuses on the core aspects of the interaction: the points of differentiation between the test conditions"

Participants

Conditions for generalizing from participants

  1. People actually tested must be members of the same population of people to whom results are assumed to hold
  2. A sufficient number of participants must be tested
    • Warning - Can always achieve statistical significance if enough participants are used.
    • "Using more participants seems like a good idea, but there is a downside. If there truly is an inherent difference in two conditions, then it is always possible to achieve statistical significance"
    • How to select best size? A: Study published research.
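
The warning above can be demonstrated with a quick simulation (a sketch; the 500 ms vs 505 ms means and 100 ms spread are invented numbers): when a tiny but real difference exists between conditions, the test statistic grows with the square root of n, so a large enough sample always reaches significance.

```python
import math
import random
import statistics

def z_stat(n, seed=0):
    """Simulate two conditions whose true means differ by a trivial 1%
    (500 vs 505 ms, sd 100 ms) and return the two-sample z statistic."""
    rng = random.Random(seed)
    a = [rng.gauss(500, 100) for _ in range(n)]
    b = [rng.gauss(505, 100) for _ in range(n)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    return diff / se

# z grows roughly with sqrt(n): with enough participants even this
# negligible difference eventually exceeds the |z| = 1.96 threshold.
for n in (20, 500, 100_000):
    print(n, round(z_stat(n), 2))
```

This is why MacKenzie's advice is to pick sample sizes from published comparable studies rather than maximizing n.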

Sampling

  • "convenience sampling compromises the external validity of the research, since the true population is somewhat narrower than the desired population"
  • mitigate via questionnaires and gather demographic data

Questionnaire Design

  • generally use ratio-scale response
  • "ordinal data are inherently lower quality than ratio-scale data, since it is not possible to compute the mean or standard deviation."

Within-subjects and between-subjects

  • Within subjects: each participant is tested on each level
    • gets more samples since everyone is tested on each treatment
  • Between subjects: separate group of participants is used for each test condition
    • Con: more participants needed
    • Pro: Avoid interference between test conditions
  • HCI tends to favor within-subjects designs
  • "It is worth noting that in many areas of research, within-subjects designs are rarely used. Research testing new drugs, for example, would not use a within-subjects design because of the potential for interference effects. Between-subjects designs are typically used."
  • Mixed design: assign the levels of one factor within-subjects and the levels of the other factor between-subjects

Order effects, counterbalancing, latin squares

"Order effects" - Confounding variables based on ordering that may improve/worsen performance:

  • Practice effect - "In most within-subjects designs, it is possible—in fact, likely—that participants’ performance will improve as they progress from one test condition to the next."
  • Fatigue effect - "performance will worsen on conditions that follow other conditions"

Counterbalancing - a mitigating technique that compensates for order effects by dividing participants into groups and administering the conditions in a different order to each group

Latin squares - way to mitigate order effect (see chapter, confusing)

  • "To counterbalance for learning effects, participants are divided into three groups with the tasks administered according to a Latin square (see Figure 5.7b)."
  • A simpler way is to just randomize the order of conditions.
    • Works if task is brief, many repeats of task, and many test conditions
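
The two mitigation strategies above, a balanced Latin square and simple per-participant randomization, can be sketched as follows (condition names are placeholders):

```python
import random

def balanced_latin_square(n):
    """Balanced Latin square for an even number of conditions n: each
    condition appears once per serial position, and each condition
    immediately precedes every other condition equally often.
    Row r is the presentation order for participant group r."""
    # First row: 0, 1, n-1, 2, n-2, ...; later rows shift by 1 (mod n).
    first = [(i // 2 + 1) % n if i % 2 else (n - i // 2) % n for i in range(n)]
    return [[(c + r) % n for c in first] for r in range(n)]

conditions = ["A", "B", "C", "D"]
orders = [[conditions[i] for i in row] for row in balanced_latin_square(4)]
# orders[0] == ["A", "B", "D", "C"], orders[1] == ["B", "C", "A", "D"], ...

# The simpler alternative from the notes: randomize order per participant.
random_order = random.sample(conditions, len(conditions))
```

With an odd number of conditions a single square cannot be balanced for immediate precedence; a common fix is to also run each square's mirror image.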

Group effects and asymmetric skill transfer

"If the learning effect is the same from condition to condition in a within-subjects design, then the group means on a dependent variable should be approximately equal."

Group effect : Differences across groups in the mean scores on a dependent variable

  • typically due to asymmetric skill transfer, ie. differences in the amount of improvement, depending on the order of testing.
  • Best way to avoid this is to use between-subject design

Longitudinal Studies

"Performance trends in longitudinal studies, as shown in Figure 5.15, are often accompanied by an equation and best-fitting curve demonstrating the power law of learning."

  • The goal of a longitudinal study is to compare the viability of a new technique against current practice
  • "From a practical standpoint, learning a new technique bears a cost, since performance is initially superior with the current technique. However, after the crossover point is reached, the new technique provides a benefit, since performance is superior compared to current practice."
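
The power law of learning mentioned above, T = a·n^b (time T on session n, with b < 0), is usually fitted by linear regression in log-log space. A sketch with made-up session times:

```python
import math

def fit_power_law(sessions, times):
    """Least-squares fit of T = a * n**b via the linear model
    log T = log a + b * log n."""
    xs = [math.log(n) for n in sessions]
    ys = [math.log(t) for t in times]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Invented data: task time (s) improving over ten practice sessions.
sessions = list(range(1, 11))
times = [30.0 * n ** -0.3 for n in sessions]
a, b = fit_power_law(sessions, times)  # recovers a ≈ 30.0, b ≈ -0.3
```

Comparing the fitted curves for the new and current techniques gives the crossover point the chapter describes.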

Running the experiment

  • Always run a pilot test
  • Performing tasks reasonably quickly and with high accuracy, but at a rate comfortable to the individual, is usually the goal.

Nielsen, J., & Molich, R. (1990, March). Heuristic evaluation of user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 249-256). ACM.

Abstract:

Heuristic evaluation is an informal method of usability analysis where a number of evaluators are presented with an interface design and asked to comment on it. Four experiments showed that individual evaluators were mostly quite bad at doing such heuristic evaluations and that they only found between 20 and 51% of the usability problems in the interfaces they evaluated. On the other hand, we could aggregate the evaluations from several evaluators to a single evaluation and such aggregates do rather well, even when they consist of only three to five people.

Heuristic evaluation

  • Definition: Looking at an interface and trying to come up with an opinion about what is good and bad about the interface

Evaluation:

  • Given a task with inherent problems, users did heuristic evaluation to point out problems
  • The average proportion of usability problems found was 51%, 38%, 26%, and 20% in the four experiments respectively. Even in the best case, only about half of the issues were found.
  • Aggregating across the heuristic evaluations led to better coverage of identifying usability problems.
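
The benefit of aggregation is often summarized with a simple independence model (this formula is the standard later formalization of the idea, not something derived in this paper): if each evaluator independently finds a given problem with probability p, then k evaluators together find it with probability 1 - (1 - p)^k.

```python
def expected_coverage(p, k):
    """Probability that at least one of k independent evaluators,
    each finding a problem with probability p, finds it."""
    return 1 - (1 - p) ** k

# With evaluators who individually find ~31% of problems:
coverage = [round(expected_coverage(0.31, k), 2) for k in (1, 3, 5)]
# coverage == [0.31, 0.67, 0.84]
```

This matches the paper's empirical finding that aggregates of three to five evaluators do rather well even when individuals perform poorly.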

(CHI) Zhang, W. & Lim, B. Y. (2022). Towards Relatable Explainable AI with the Perceptual Process. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems.

Problem: ML models need to provide "contrastive explanations". Current explanations are not great, since they rely on rudimentary comparisons between examples or raw features.

Solution: XAI Perceptual Processing Framework and RexNet model.

  • Used on vocal emotion recognition
  • multi-task deep neural network, predict and explain emotions from speech.
  • Counterfactual explanations were useful and further enhanced with semantic cues, but not saliency explanations

(CHI) Elsden, C., Duggan, M., Thornton, P., Chatting, D. & Dwyer, A. C. (2022). Zoom Obscura: Counterfunctional Design for Video-Conferencing. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems.

What:

"we supported and collaborated with seven independent artists to explore technical and creative interventions in video-conferencing"

  • Works on interventions that help users counter/regain agency over how personal data is captured in video-conferencing tools.
  • Post-hoc analysis of how each of 7 projects employ aspects of counterfunctional design.

What is counterfunctional:

“a thing that exhibits features that counter some of its own ‘essential functionality’ while nonetheless retaining familiarity as ‘essentially that thing’”

Conclusion - Artists used counterfunctional strategies like:

  1. reducing one’s visibility within a meeting
  2. resisting the boundaries of a Zoom ‘window’;
  3. countering the centrality of the face;
  4. switching off certain features entirely; and
  5. introducing new temporalities within a Zoom meeting

Week 9 - Artifacts, Interfaces, and Politics

Winner, L. (1980). Do Artifacts Have Politics? In Daedalus 109(1). (pp. 121-136). MIT Press.

Read notes here: https://docs.google.com/document/d/1ePSb1ZEvXdCpsRHgRtSz30Enb2NlhFLI/edit#heading=h.gjdgxs

Cowan, R. S. (1976). The “industrial revolution” in the home: Household technology and social change in the 20th century. Technology and Culture 17(1). (pp. 1-23). Johns Hopkins University Press.

"The standard sociological model for the impact of modern technology on family life clearly needs some revision: at least for middle-class nonrural American families in the 20th century, the social changes were not the ones that the standard model predicts. In these families the functions of at least one member, the housewife, have increased rather than decreased; and the dissolution of family life has not in fact occurred."

"Our standard notions about what happens to a work force under the pressure of technological change may also need revision. When industries become mechanized and rationalized, we expect certain general changes in the work force to occur: its structure becomes more highly differentiated, individual workers become more specialized, managerial functions increase, and the emotional context of the work disappears."

  • The household workforce actually condensed, concentrating domestic work on housewives.
  • Individual workers became less specialized

"Finally, instead of desensitizing the emotions that were connected with household work, the industrial revolution in the home seems to have heightened the emotional context of the work, until a woman's sense of self-worth became a function of her success at arranging bits of fruit to form a clown's face in a gelatin salad."

  • Workers put far more emotional weight on the work than its inherent value warrants - especially housewives.

Friedman, B., Kahn Jr, P. H., Borning, A., & Huldtgren, A. (2013). Value Sensitive Design and Information Systems. In P. Zhang & D. Galletta (Eds.) Human-Computer Interaction in Management Information Systems: Foundations. New York: M.E. Sharpe, Inc.

Value sensitive design

  • Definition: "theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process"
  • Value: "what a person or group of people consider important in life" - not economic value

Case studies:

  1. information and control of web browser cookies, implicating the value of informed consent.
  2. high-definition plasma displays in an office environment to provide a "window" to the outside world, implicating the values of physical and psychological well-being and privacy in public spaces
  3. integrated land use, transportation, and environmental simulation system to support public deliberation and debate on major land use and transportation decisions, implicating the values of fairness, accountability, and support for the democratic process, as well as a highly diverse range of values that might be held by different stakeholders, such as environmental sustainability, opportunities for business expansion, or walkable neighborhoods

Tripartite Methodology:

  1. Conceptual investigation
    • "Who are the direct and indirect stakeholders affected by the design at hand? How are both classes of stakeholders affected? What values are implicated?"
    • Trust stems from goodwill and lack of harm.
  2. Empirical investigation - Evaluate the success of a particular design.
    • "How do stakeholders apprehend individual values in the interactive context? How do they prioritize competing values in design trade-offs? How do they prioritize individual values and usability considerations?"
    • Includes observations, interviews, surveys, experimental manipulations, collection of relevant documents, and measurements of user behavior and human physiology.
  3. Technical investigation
    • Focus on how existing technological properties and underlying mechanisms support or hinder human values
      • e.g. video-collab systems - blur or not blur? Depends on values of privacy vs awareness.
    • Involve the proactive design of systems to support values identified in the conceptual investigation

Human values often implicated in system design:

  1. Human welfare
  2. Ownership and property
  3. Privacy
  4. Freedom from bias - Refers to systematic unfairness perpetrated on individuals or groups, including pre-existing social bias, technical bias, and emergent social bias
  5. Universal usability - Refers to making all people successful users of information technology
  6. Trust
  7. Autonomy
  8. Informed consent
  9. Accountability
  10. Courtesy
  11. Identity
  12. Calmness
  13. Environmental sustainability

(GT) Liu, Z., Nersessian, N., & Stasko, J. (2008). Distributed cognition as a theoretical framework for information visualization. IEEE Transactions on Visualization and Computer Graphics, 14(6). (pp. 1173-1180).

Problem: "Information visualization research lacks encompassing theories"

Solution: Distributed cognition framework can be used to substantiate the theoretical foundation of InfoVis.

Contributions:

  1. "We highlight the view that cognition is more an emergent property of interaction than a property of the human mind"
    • In studying cognition, we cannot thus separate the environment from the human, neither can we separate perception and action from cognitive processes, as with the traditional view.
  2. "We argue that a reductionist approach to study the abstract properties of isolated human minds may not be useful in informing InfoVis design"
  3. "We propose to make cognition an explicit research agenda, and discuss the implications on how we perform evaluation and theory building"

Conclusion

  • On distributed cognition (DCog): "We have been arguing that DCog provides a more useful framework to address the central issues of representation and interaction than does the traditional cognitive science framework. It provides the kinds of interpretive concepts and methods needed to analyze the role of InfoVis in complex cognitive tasks. While it has limitations in providing prescription and prediction, it is useful in helping us understand phenomena that are important for InfoVis, and our designs can be informed with this understanding."

(GT) Kidd, C., Orr, R., Abowd, G., Atkeson, C., Essa, I., MacIntyre, B., Mynatt, E., Starner, T. & Newstetter, W. (1999). The aware home: A living laboratory for ubiquitous computing research. In N. Streitz, S. Konomi, & H. Burkhardt (Eds.) Cooperative Buildings: Integrating Information, Organizations, and Architecture (pp. 191-198).

Problem: New research challenges arise as computation moves away from the desktop.

Solution: Create an experimental home (Aware Home) that knows information about itself and the whereabouts/activities of its inhabitants.

Prototype Home

  • 2 mirrored floor plans. Why? Control and treatment.
  • Based on previous project (Classroom 2000) - One of the main goals of that project has been to instrument an actual classroom environment to enable the recording of live lectures

Tech-centered research agenda

  1. Context Awareness and Ubiquitous Sensing - build sensors that can help a computational environment to interpret and begin to understand the contextual cues of its occupants
  2. Individual Interaction with the Home - Wearable sensors + home = "delivery of convenient, personalized information and entertainment services at almost any time and in any context"
  3. Smart Floor - Identify who is in the house, and where
  4. Finding lost objects - self-explanatory

Human-centered research agenda

  1. Support for elderly - "The current practice of institutionalizing elderly people into assistive living centers is expensive and often an unsatisfactory experience for all involved."
    • Moving out to some form of assisted living provides the security of frequent monitoring and the availability of medical assistance in the event of an emergency. If these people were able to keep that "peace of mind" while still living in their own homes, they would not be forced away from the familiarity and friends to which they are accustomed.
    • Solution: Design a system that provides the type of monitoring offered by assisted living centers.
  2. Evaluation and social issues - Privacy

(GT) Hu, A., Chancellor, S., & De Choudhury, M. (2019). Characterizing Homelessness Discourse on Social Media. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-6).

Goal: Provide insights to understand how homeless bloggers express themselves.

How: Computational linguistic analysis of a large corpus of Tumblr blog posts.

Results:

  • Psycholinguistic analysis: tokens expressing anger, sadness, and anxiety were significantly higher (by 12-54%) in the homeless group compared to the other group
  • Hashtag use: financial hardship ('poverty'), mental health ('suicide'), professional distress ('jobless'), practical challenges ('housing').
  • Homeless bloggers collected donations through Tumblr.

(GT) Kozubaev, S., Rochaix, F., DiSalvo, C., & Le Dantec, C. (2019). Spaces and Traces: Implications of Smart Technology in Public Housing. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM.

Problem: How do smart home technologies fit into the context of public housing in the US?

Solution: Led participatory design workshops with residents/building managers.

Results:

  • Tracking and monitoring
    • Health and wellness - personal devices help those with chronic conditions
    • Concern for personal safety - Tracking mechanisms in case of kidnapping. Smart street lamps, security monitoring.
  • Lack of trust in these devices is high.
    • Security welcome in public space, but not in private.
  • Shifting baselines - how changes born of smart tech may lead to distrust.
    • Infrastructure vs household responsibilities - who is footing the bill for this tech?

Week 10 - Evaluation and Agile Development

Polson, P. G., Lewis, C., Rieman, J., & Wharton, C. (1992). Cognitive walkthroughs: a method for theory-based evaluation of user interfaces. International Journal of Man-Machine Studies, 36(5). (pp. 741-773).

Cognitive walkthrough

  • Definition: "One of our major goals in proposing the cognitive walkthrough is to develop a theoretically-based design methodology that can be used in actual development situations."
    • Focuses on the ease of learning
    • "Typical design guidelines, e.g. Smith and Mosier (1986) or Rubenstein and Hersh (1984), provide very general recommendations that are not specific enough to guide the development process."
  • Validity: "Claims that a given step would not cause any difficulties must be supported by theoretical arguments, empirical data, or relevant experience and common sense of the team members."

Theory of learning by exploration

  • "A cognitive walkthrough evaluates the ease with which a typical user can successfully perform a task using a given interface design."
  • Focus is on the user being able to explore the interface and figure things out.

Theoretical Foundations

  • Based on Norman's theory of action - series of stages beginning with the user's goal, leading to the generation of a plan / execution / evaluation / etc. of the goal.

Model

  • Based on Kintsch's construction-integration model
  • "describes the processes by which users integrate a representation of text or other perceptual input with background knowledge to construct a representation that will enable them to perform a task"
  • Goal structures similar to GOMS model
  • Actions lead to goals
  • Linked by propositions
  • Not all actions must be linked to goals

Management of Goal Structure
  1. Generate goal structure
    • Initially incomplete. User has to figure out goal, and actions necessary.
    • "Fragments of the goal structure are generated by the user as he or she interacts with the interface while attempting to carry out the task."
  2. Generate goals for actions
    • Interface components interact with user's background knowledge to generate explicit goals.
    • "For example, a user with a goal of “play my phone messages” may see the prompt “press P to play messages” and form the goal of “Press the ‘P’ button.”"
  3. Interpreting feedback
    • user must interpret the system’s response to determine whether some current goal has been accomplished
  4. The "and-then" goal structures
    • "In many tasks and in interactive dialogs, goals are not posted individually but as part of a structure that represents a goal and an associated sequence of subgoals that must be accomplished in a fixed order."
    • User need to tie together subgoals - this structure is called "and-then"
    • "We call this goal structure an “and-then” structure because it indicates that the original goal is to be achieved by accomplishing, in order, the first subgoal and then the second subgoal."

Cognitive walkthrough procedure

2 phases - preparation and evaluation.

  1. Preparation - evaluators select a suite of tasks that are a representative sample of the tasks that the application is intended to support.
  2. Evaluation - the interaction between the user and the interface is analysed in depth.

Analysis:

  1. what goals the user should have leading up to the action
  2. whether the prompts and labels of the interface will induce the user to take the correct action, assuming the correct goals
  3. how the user’s goals will change in response to the feedback from the interface after the action is performed.
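
The per-action analysis above can be captured as a simple structured record for walkthrough note-taking (the field names are my own shorthand for the three questions, not terminology from the paper):

```python
from dataclasses import dataclass

@dataclass
class WalkthroughStep:
    """One action in a cognitive walkthrough, with the three analysis
    questions recorded as free-text answers plus a pass/fail judgment."""
    action: str
    goals_before: str          # 1. goals the user should have before acting
    label_induces_action: str  # 2. will prompts/labels induce the action?
    feedback_effect: str       # 3. how goals change given the feedback
    problem: bool = False      # flag steps likely to cause difficulty

steps = [
    WalkthroughStep(
        action="Press 'P'",
        goals_before="Play my phone messages",
        label_induces_action="Prompt 'press P to play messages' maps directly",
        feedback_effect="Messages start playing; goal satisfied",
    )
]
problems = [s.action for s in steps if s.problem]
```

Flagged steps then need the theoretical argument, empirical data, or team experience that the method requires as justification.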

Example of cognitive walkthrough

(This one keeps going with an example cognitive walkthrough. Quite long)

Wania, C. E., Atwood, M. E., & McCain, K. W. (2006, June). How do design and evaluation interrelate in HCI research? In Proceedings of the 6th Conference on Designing Interactive Systems. (pp. 90-98). ACM.

Problem: "In HCI there are authors that focus more on designing for usability and there are authors that focus more on evaluating usability. The relationship between these communities is not really clear."

Solution: "We use author cocitation analysis, multivariate techniques, and visualization tools to explore the relationships between these communities."

  • How: "we analyze the citations in the HCI literature over a fourteen-year period in a database of over ten million documents and employ bibliographic cocitation analysis to uncover the understanding implicitly used by these authors."
  • How connected are design and evaluation? They ran, "author cocitation analysis to reveal how authors who focus on design and evaluation cite one another."

Results: "seven clusters that could be identified as:"

  1. Design Theory and Complexity,
  2. Design Rationale - Fuzzy, "design rationale may mean different things to the different clusters on the map"
  3. Cognitive Theories and Models - focuses on understanding the users, how they accomplish tasks, and why they think something is usable.
  4. Cognitive Engineering - focuses on the cognitive properties of people and how these properties influence people’s interactions with elements in an environment.
  5. Computer-Supported Cooperative Work (CSCW) - focuses on building systems that enable and enhance collaboration and cooperation among groups in the workplace
  6. Participatory Design - Focuses on workplace democracy and human development
  7. User-Centered Design - places the user at the center of the design process

Data Analysis

  • "Many authors in the field of HCI have begun to focus on design and evaluation as an evolutionary process situated in the context of use"

Cocitation Map

Who's at the center of the HCI community?

  • A: Gerhard Fischer, Terry Winograd, Lucy Suchman, Ed Hutchins, and Gary Olson
  • Under cluster, "cognitive theories and models"
  • What ties them together? "We believe that the central theme that ties these five authors together is a focus on the context of use of systems."

What's the next hot topic in HCI?

  • Based on authors at the center
  • "We predict, therefore, that the next hot topic in HCI will be a focus on understanding design and evaluation in the context of use."

Chamberlain, S., Sharp, H., & Maiden, N. (2006). Towards a framework for integrating agile development and user-centered design. In Proceedings of the 4th International Conference on Extreme Programming and Agile Processes in Software Engineering. (pp. 143-153). Springer.

Problem: People think user-centred design and agile can be integrated as one. But they are different, and these differences make using the methods together on development projects hard.

Solution: Field study to investigate use of agile methods alongside UCD in one org.

  • Provides 5 principles for integrating UCD and agile

Similarities and Differences Between UCD and Agile Development

Similarities:

  • Both rely on "iterative development process, building on empirical information from previous cycles"
  • Both place heavy emphasis on the user's role in the development process
  • Both emphasize the importance of "team coherence" - whole team should have the user in mind while developing the product.

Diffs:

  • Agile seeks minimal documentation
  • Agile starts building before the users are completely understood

Evaluation

  • Observed media company trying to integrate agile and UCD

Four Themes:

  1. User involvement - Through our observations, user involvement was characterised as being where:
    • the users were invited to give opinions or test prototypes
    • the users were interviewed, observed or questioned for research purposes
    • the user’s interaction with the product was considered in detail
  2. Collaboration and culture - Collaboration was observed with relation to:
    • The collaboration between individuals within the team
    • Specifically, the collaboration between designers and developers
    • The culture that the chosen methodology created
  3. Prototyping - different ways of prototyping
  4. Project lifecycle - different ways of lifecycle

5 Principles for Integrating UCD and Agile Development

  1. User Involvement – the user should be involved in the development process but also supported by a number of other roles within the team, such as having a proxy user on the team.
  2. Collaboration and Culture – the designers and developers must be willing to communicate and work together extremely closely, on a day-to-day basis. Likewise, the customer should be an active member of the team, not just a passive bystander.
  3. Prototyping – the designers must be willing to “feed the developers” with prototypes and user feedback on a cycle that works for everyone involved.
  4. Project Lifecycle – UCD practitioners must be given ample time in order to discover the basic needs of their users before any code gets released into the shared coding environment.
  5. Project Management – Finally, the agile/UCD integration must exist within a cohesive project management framework that facilitates without being overly bureaucratic or prescriptive.

Conclusion

Why Agile/HCI methodologies go at odds:

  • Power struggles between developers and designers
  • Time differences between designers’ and developers’ capacity to create tangible outcomes from each iteration round. Development usually takes more time
  • Communication issues if members of the team don’t take part in some elements/phase of the project
  • A reluctance to understand the needs of each element of the project
  • The extent to which the user is able/willing to contribute to the project

These can be overcome if:

  • There is some balancing role or mechanism put in place to ensure that each discipline has equal power on the team
  • Resource and project management ensure that time and resources are allocated so team members stay utilised and don't become frustrated while waiting for results
  • All members of the project team are available/involved at each key point of the project
  • The user plays a part in the project so that their requirements are catered for and that the end-product works in a realistic situation

(GT) Deeb-Swihart, J., Endert, A., & Bruckman, A. (2019). Understanding Law Enforcement Strategies and Needs for Combating Human Trafficking. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM.

Problem: In human trafficking, with terabytes of available data such as sex-work ads, policing is increasingly a big-data research problem.

Solution: Understand computational needs of law enforcement

Results: 3 major areas where HCI can help:

  1. InfoViz of large, complex, geospatial data
  2. Unified information dbs raise issues of usable security and privacy
  3. Current IT systems are "archaic", raise policy issues wrt resource allocation for software development.

How:

  • Semi-structured interviews with 16 law enforcement personnel working on human trafficking cases
  • Expertise varied amongst participants; all were at least familiar with the majority of case management tools / police DBs.

Findings:

  • Most start out the same: proactive or reactive.
    • "proactive" - look for online ads for sex work
    • "reactive" - start with informant's tips (local community via anonymous tips, social media, hotlines, other police departments)

Tools used during investigation:

  1. Police databases
  2. Human-trafficking-specific tools (Thorn's Spotlight, Marinus Analytics' Traffic Jam)
  3. Visual Analysis Tools - ArcGIS / Powerpoint / Analyst Notebook
  4. Case Organization Tools
  5. General websites

Computational Needs:

  1. Tools to support collaboration - lack of shared data and the problem in identifying key partnerships especially across state lines.
  2. Tools to Support Case Building and Organization
    • "With investigators working multiple complex cases at the same time, it can be hard to keep track of all the relevant information for a particular case with (physical) systems"
  3. Tools to Support Pattern Identification and Forecasting in Geospatial-Temporal Data
    • "investigators need to map out where a person has been at what times for a number of reasons including proving travel in court"
    • "While some had access to tools like ArcGIS for this purpose, the data the investigators have access to is not in a format that can easily be ported into ArcGIS."
  4. Tools that Unify Existing Software and Methods
    • "Because all these tools are disconnected, investigators end up spending a lot of time keeping track of the connections across each of the tools."

Discussion

  • "When designing tools for law enforcement, it is important to choose algorithms that are human interpretable and design visualizations that help officers get an intuition for how the process works"
  • "training, funding, and experience with advanced technology is unequally distributed across police departments."

(GT) Shahmiri, F., Chen, C., Waghmare, A., Zhang, D., Mittal, S., Zhang, S., Wang, Y., Wang, Z., Starner, T., & Abowd, G. (2019). Serpentine: A Self-Powered Reversibly Deformable Cord Sensor for Human Input. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM.

What:

  • a highly stretchable self-powered sensing material that can recognize human input based on deformations of its shape
  • can detect different types of interactions with it.
  • Significance: "These computational materials are a new way to think about the advancement of computing, marching in a different way closer to Weiser’s initial vision of ubiquitous computing"

Contributions:

  • A description of how to manufacture the Serpentine sensor inexpensively
  • A discussion of the physical operating principles of the system and parameters that can be used to tune performance
  • A recognizer that can distinguish six different interactions with Serpentine
  • A 12 participant quantitative and qualitative study on interacting with Serpentine

Evaluation:

  • "we conduct a user study with 12 participants and 6 interactions. We were able to achieve 95.7% accuracy with user-dependent and 92.17% accuracy for user-independent models. We later discussed user feedback about Serpentine and suggest design parameters for defining new interactions."

(GT) Wong-Villacres, M., Kumar, N., & DiSalvo, B. (2019). The Parenting Actor-Network of Latino Immigrants in the United States. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM.

Problem: Technology to support parents from nondominant groups in positively impacting their children's education is underexplored.

Solution: Use Actor-Network Theory (ANT) to take a sociotechnical view of Latino Spanish-speaking immigrants in the US - how they form alliances with other actors (i.e., teachers) and technology to exchange info that enriches their children's education.

  • "The use of ANT allowed us to advance work on parenting in HCI by providing a deeper understanding of the reasons—including attributes embedded in technology—impacting the quality of information channels in the parental engagement network of a nondominant group."

Actor-Network Theory:

  • ANT: "To analyze this sociotechnical system and identify the quality of the information exchanges that it supports, we use the ANT theoretical approach. ANT fundamentally rejects dwelling on the analysis of what is social and what is technical [12, 44]. Instead, it attempts a deep understanding of how human and nonhuman actors align their interests and goals to form and maintain networks of alliances or associations"
  • "ANT’s focus on interests’ alignment allows uncovering the actions actors are willing to perform to trans- late their interests as well as to persuade others"

Methodology:

  • ethnographic study, 30 parents (ages 22-45)
  • interviews / participant observations

4 main categories of actors in parenting network

  1. Familial unit
    • mothers (trusting / resourceful / insecure)
  2. Schooling environment
    • teachers (publishing info, PTAs)
    • bilingual parent liaisons, other parents
  3. Community at-large
    • close relations (indirectly broadens parents' access to education information)
    • supporting organizations (Open organizations, specialized organizations, religious institutions)
    • everyday people - knowledge from general community
  4. Technology
    • Everyday technologies - smartphones, social media
      • "The closeness and familiarity these apps (Whatsapp, etc) afford align better with parents’ everyday needs and goals." - better than school emails, websites
    • School-related technologies - "Teachers have also formed alliances with parent-classroom communication technologies such as ClassDojo, Seesaw, Parent portals, and weekly newsletter emails, all of which keep parents updated on kids’ activities in the classroom,"

Design Opportunities:

  1. Designing to engage, not impose
    • Use common tools that parents already use
    • "A way to increase parents’ engagement with parent-school media, thus, could be to enhance everyday tech- nologies so that they can provide support to parental engagement practices. For example, Google Translate could be augmented to help parents learn more about homework materials"
  2. Generate meaning at scale
    • intelligent agents could curate information from the larger community and offer it to parents in the form of timely, digestible suggestions. These agents could also converse with parents to address doubts, provide contexts and anticipate needs.
  3. Personalizing Detached Information
    • "explore mo- tivating teachers and parents to share personal information with each other, considering their time constraints and privacy concerns."

(GT) Joyner, D. (2019). The CHI of Teaching Online: Blurring the Lines Between User Interfaces and Learner Interfaces. In E. Kapros & M. Koutsombogera (Eds.) Designing for the User Experience in Learning Systems, Human-Computer Interaction Series. Springer.

Analysis:

  1. Case study on a graduate-level human-computer interaction class
  2. Evaluate - examine how the class’s design implements usability principles like equity, flexibility, and consistency
  3. Illustrate intersection of interface design and learning design - where do they not overlap?

"the classroom itself is a user interface"

  • virtual class = interface
  • "asynchronous learning environments must use these computational interfaces to create the same effects through different mechanisms"

Similarities in learning design and UI design:

  1. Value of rapid feedback is high for both

Competing nature between the two:

"different objectives of the two design paradigms—one to support immediate interaction, the other to support long-term learning gains—mean that the application of one paradigm’s heuristics and guidelines to the other must be performed carefully."

  1. "Desirable difficulties" - interface designer might make the learning experience too easy.

Design Principles:

"we take four common design principles or theories from the HCI literature—flexibil- ity, equity, consistency, and distributed cognition—and examine their applications to the design of this online course"

  1. Flexibility
    • Geographic flexibility
    • Temporal flexibility
    • Preference flexibility - mixing and matching videos / assignments how students see fit. Also with participation policy.
  2. Equity
    • Equity through flexibility
    • Equity through admissions - "any student who meets the minimum requirements is admitted"
    • Equity through anonymity - "students have considerable control over what portions of their identity they reveal to classmates and instructors"
      • equity for those with disabilities / deformities
  3. Consistency
    • Assignment cadence - "what is there to motivate students to remain engaged long before assessments are due?" "Regular deadlines force the majority of students to remain engaged with the course on a weekly basis"
    • Announcement cadence - Reminders that the class is ongoing. Important for asynchronous classroom.

Distributed cognition

  1. Offloading through announcements
    • "offload the attention students may otherwise spend thinking about course procedures, assignment deadlines, and so on onto the interface, allowing them instead to focus on the course material"
    • "whereas traditional in-person courses operate on a “push” structure, online courses emphasize a “pull” structure"
  2. Offloading through documentation
    • What information should be incorporated into the course’s fundamental documentation, and what should be pushed to students through announcements and discussions?
    • Errs on heavy documentation - students will help each other figure out the policies, ie distributed knowledge.
  3. Offloading through assessment design
    • Open-anything tests
    • "it is the system comprised of the student, their resources, and their environment that is being assessed on the test rather than just their cognition"

Additional principles:

  1. Structure
    • "Organizing lecture material into pre-produced videos allows the presentation of it in a way that brings out the underlying structure of the content rather than forcing it into a prescribed lecture schedule"
  2. Perceptibility
    • Consistent access to gradebook
  3. Tolerance
    • "interface design and learning design can merge to create a tolerance for mistakes more related to administration and policies instead of content errors"
    • e.g. two-hour grace window after the official deadline where submissions are still accepted
    • general leniency with late submissions
  4. Feedback
    • Heavy emphasis on streamlining grading workflows - results in quick turnaround.
    • Peer review - participation grade ensures feedback will arrive rapidly.

Evaluation findings (Likert scale feedback):

  1. First, the ratings of course effectiveness and quantity learned have not changed semester to semester.
  2. Second, reviews suggest that the design decisions described herein are at least somewhat effective in supporting the student experience
    • students specifically comment positively on criteria that typically are considered lacking in online courses