HCI Deep Dives

Author(s): Kai Kunze
Podcast

  • Summary

  • HCI Deep Dives is your go-to podcast for exploring the latest trends, research, and innovations in Human-Computer Interaction (HCI). Auto-generated using the latest publications in the field, each episode dives into in-depth discussions on topics like wearable computing, augmented perception, cognitive augmentation, and digitalized emotions. Whether you’re a researcher, practitioner, or just curious about the intersection of technology and human senses, this podcast offers thought-provoking insights and ideas to keep you at the forefront of HCI.
    Copyright 2024. All rights reserved.
Episodes
  • Seeing our Blind Spots: Smart Glasses-based Simulation to Increase Design Students’ Awareness of Visual Impairment
    Oct 6 2024
    As the population ages, many people will acquire visual impairments. To improve design for these users, it is essential to build awareness of their perspective during everyday routines, especially for design students. Although several visual impairment simulation toolkits exist both in academia and as commercial products, analog and static simulation tools do not reproduce effects tied to the user's eye movements. Meanwhile, VR and video see-through AR simulation methods are constrained by fields of view smaller than the natural human visual field and also suffer from the vergence-accommodation conflict (VAC), which correlates with visual fatigue, headache, and dizziness. In this paper, we enable an on-the-go, VAC-free visually impaired experience by leveraging our optical see-through glasses. The FOV of the glasses is approximately 160 degrees horizontally and 140 degrees vertically, and participants can experience both loss of central vision and loss of peripheral vision at different severities. Our evaluation (n=14) indicates that the glasses can significantly and effectively reduce visual acuity and visual field without causing typical motion sickness symptoms such as headache or visual fatigue. Questionnaires and qualitative feedback also showed how the glasses helped increase participants' awareness of visual impairment. https://dl.acm.org/doi/10.1145/3526113.3545687
    7 min
  • Emolleia – Wearable Kinetic Flower Display for Expressing Emotions
    Oct 5 2024

    What we wear (our clothes and wearable accessories) can represent our mood at the moment. We developed Emolleia to explore how to make aesthetic wearables more expressive, turning them into a novel form of non-verbal communication for expressing our emotional feelings. Emolleia is an open wearable kinetic display in the form of three 3D-printed flowers that can dynamically open and close at different speeds. With our open-source platform, users can define their own animated motions. In this paper, we describe the prototype design, hardware considerations, and user surveys (n=50) evaluating the expressiveness of 8 pre-defined animated motions of Emolleia. Our initial results showed that animated motions can feasibly communicate different emotional feelings, especially along the valence and arousal dimensions. Based on these findings, we mapped the eight pre-defined animated motions to the reported, user-perceived valence, arousal, and dominance, and discussed possible directions for future work. (A rough, hypothetical sketch of how such a keyframed motion might be defined appears after the episode list below.)

    https://dl.acm.org/doi/10.1145/3490149.3505581

    14 min
  • “I am both here and there” Parallel Control of Multiple Robotic Avatars by Disabled Workers in a Café
    Oct 4 2024

    Robotic avatars can help disabled people extend their reach in interacting with the world. Technological advances make it possible for individuals to embody multiple avatars simultaneously. However, existing studies have been limited to laboratory conditions and have not involved disabled participants. In this paper, we present a real-world implementation of a parallel control system that allows disabled workers in a café to embody multiple robotic avatars at the same time and carry out different tasks. Our data corpus comprises semi-structured interviews with workers, customer surveys, and videos of café operations. Results indicate that the system increases workers' agency, enabling them to better manage customer journeys. Parallel embodiment and transitions between avatars create multiple interaction loops in which the links between disabled workers and customers remain consistent while the intermediary avatar changes. Based on our observations, we theorize that disabled individuals possess specific competencies that increase their ability to manage multiple avatar bodies.

    https://dl.acm.org/doi/10.1145/3544548.3581124

    6 min
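The Emolleia episode above notes that users can define their own animated motions on its open-source platform. The paper's actual motion format is not reproduced here, so the following is only a minimal, hypothetical sketch: it assumes a motion can be expressed as time-stamped "openness" keyframes (0.0 = petals closed, 1.0 = fully open) that are linearly interpolated and mapped to a servo angle. All names, the openness scale, and the servo range are illustrative assumptions, not the Emolleia API.

```python
# Hypothetical sketch of a keyframed "animated motion" for a kinetic flower.
# NOT the Emolleia codebase; format, names, and ranges are assumptions.

from bisect import bisect_right

# One motion = list of (time_in_seconds, petal_openness) keyframes,
# where 0.0 is fully closed and 1.0 is fully open.
SLOW_BLOOM = [(0.0, 0.0), (3.0, 1.0), (6.0, 0.0)]          # calm, low arousal
QUICK_FLUTTER = [(0.0, 0.0), (0.3, 1.0), (0.6, 0.0),
                 (0.9, 1.0), (1.2, 0.0)]                    # lively, high arousal

def openness_at(motion, t):
    """Linearly interpolate petal openness at time t (seconds)."""
    times = [k[0] for k in motion]
    if t <= times[0]:
        return motion[0][1]
    if t >= times[-1]:
        return motion[-1][1]
    i = bisect_right(times, t)
    (t0, v0), (t1, v1) = motion[i - 1], motion[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def to_servo_angle(openness, closed_deg=0.0, open_deg=90.0):
    """Map openness (0-1) to a hypothetical petal servo angle in degrees."""
    return closed_deg + openness * (open_deg - closed_deg)

if __name__ == "__main__":
    # Sample the slow bloom every 1.5 s and print the servo angle to drive.
    for step in range(5):
        t = step * 1.5
        print(t, round(to_servo_angle(openness_at(SLOW_BLOOM, t)), 1))
```

Under these assumptions, a slow bloom and a quick flutter differ only in keyframe timing, which is one way a single motion format could span the calmer and more energetic expressions the paper associates with different valence and arousal levels.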
