Research



  • AI Disclosures

    Transparent AI disclosures, Article 52, obligations, trustworthiness, qualitative, law

Transparent AI Disclosure Obligations: Who, What, When, Where, Why, How

2023-2024

Advances in Generative Artificial Intelligence (AI) are resulting in AI-generated media output that is (nearly) indistinguishable from human-created content. This can drastically impact users and the media sector, especially given global risks of misinformation. While the currently discussed European AI Act aims to address these risks through Article 52's AI transparency obligations, its interpretation and implications remain unclear. In this early work, we adopt a participatory AI approach to derive key questions based on Article 52's disclosure obligations. We ran two workshops with researchers, designers, and engineers across disciplines (N=16), where participants deconstructed Article 52's relevant clauses using the 5W1H framework. We contribute a set of 149 questions clustered into five themes and 18 sub-themes. We believe these can not only help inform future legal developments and interpretations of Article 52, but also provide a starting point for Human-Computer Interaction research to (re-)examine disclosure transparency from a human-centered AI lens.




  • GenAI for UXD

    Generative AI, User Experience Design, user perceptions, interviews

Understanding Generative AI for User Experience Design

2023-2024

Among creative professionals, Generative Artificial Intelligence (GenAI) has sparked excitement over its capabilities and fear over unanticipated consequences. How does GenAI impact User Experience Design (UXD) practice, and are those fears warranted? We interviewed 20 UX Designers with diverse experience, across companies ranging from startups to large enterprises. We probed them to characterize their practices and sample their attitudes, concerns, and expectations. We found that experienced designers are confident in their originality, creativity, and empathic skills, and view GenAI's role as assistive. They emphasized the unique human factors of "enjoyment" and "agency", where humans remain the arbiters of "AI alignment". However, skill degradation, job replacement, and creativity exhaustion can adversely impact junior designers. We discuss implications for human-GenAI collaboration, specifically copyright and ownership, human creativity and agency, and AI literacy and access. Through the lens of responsible and participatory AI, we contribute a deeper understanding of GenAI fears and opportunities for UXD.




  • ShareYourReality

    Avatar co-embodiment, perceptual crossing, haptics, agency

Haptics in Virtual Avatar Co-embodiment

2023-2024

Virtual co-embodiment enables two users to share a single avatar in Virtual Reality (VR). During such experiences, the illusion of shared motion control can break during joint-action activities, highlighting the need for position-aware feedback mechanisms. Drawing on the perceptual crossing paradigm, we explore how haptics can enable non-verbal coordination between co-embodied participants. In a within-subjects study (20 participant pairs), we examined the effects of vibrotactile haptic feedback (None, Present) and avatar control distribution (25-75%, 50-50%, 75-25%) across two VR reaching tasks (Targeted, Free-choice) on participants’ Sense of Agency (SoA), co-presence, body ownership, and motion synchrony. We found (a) lower SoA in the free-choice task with haptics than without; (b) higher SoA during the shared targeted task; (c) significantly higher co-presence and body ownership in the free-choice task; and (d) greater hand motion synchrony in the targeted task. We provide cautionary considerations for including haptic feedback mechanisms in avatar co-embodiment experiences.
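
A control split such as 75-25% is commonly realized as a weighted average of the two users' tracked motions; a minimal sketch under that assumption (an illustration, not necessarily the study's exact implementation):

```python
import numpy as np

def blend_motion(pos_a: np.ndarray, pos_b: np.ndarray, weight_a: float) -> np.ndarray:
    """Blend two users' tracked hand positions into one shared avatar position.

    weight_a is user A's share of control: 0.25, 0.50, or 0.75 for the
    25-75%, 50-50%, and 75-25% conditions above.
    """
    return weight_a * pos_a + (1.0 - weight_a) * pos_b

# Example: the 75-25% condition, where user A dominates the avatar's motion.
shared_hand = blend_motion(np.array([0.1, 1.2, 0.4]),
                           np.array([0.3, 1.0, 0.5]),
                           weight_a=0.75)
```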




  • CardioceptionVR

    Interoception, cardiac, virtual reality, heart rate, interoceptive accuracy

Cardiac Interoception in Virtual Reality

2023

Measuring interoception (‘perceiving internal bodily states’) has diagnostic and wellbeing implications. Since heartbeats are distinct and frequent, various methods aim to measure cardiac interoceptive accuracy (CIAcc). However, the role of exteroceptive modalities for representing heart rate (HR) across screen-based and Virtual Reality (VR) environments remains unclear. Using a Polar H10 HR monitor, we develop a modality-dependent cardiac recognition task that modifies the displayed HR. In a mixed-factorial design (N=50), we investigate how task environment (Screen, VR), modality (Audio, Visual, Audio-Visual), and real-time HR modifications (±15%, ±30%, None) influence CIAcc, interoceptive awareness, mind-body measures, VR presence, and post-experience responses. Findings showed that participants confused their HR with underestimates of up to 30%; environment did not affect CIAcc but influenced mind-related measures; modality did not influence CIAcc, though including audio increased interoceptive awareness; and VR presence inversely correlated with CIAcc. We contribute a lightweight and extensible cardiac interoception measurement method, and implications for biofeedback displays.
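
The ±15% and ±30% conditions imply that the displayed HR is the live Polar H10 reading scaled by a fixed factor; a minimal sketch, assuming such a multiplicative modification:

```python
def displayed_heart_rate(true_hr_bpm: float, modification: float) -> float:
    """Scale the live heart rate reading before it is displayed.

    modification is the condition factor: -0.30, -0.15, 0.0 (None),
    +0.15, or +0.30, i.e., the ±15% and ±30% conditions above.
    """
    return true_hr_bpm * (1.0 + modification)

# Example: a true HR of 70 bpm shown under the -30% condition.
print(displayed_heart_rate(70.0, -0.30))  # 49.0 bpm
```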




  • BreatheWithMe

    Breathing, visualization, vibrotactile, social breath awareness, collaborative tasks, synchrony

BreatheWithMe: Multimodal, Social Breath Awareness

2022-2023

Sharing breathing signals has the capacity to provide insights into hidden experiences and enhance interpersonal communication. However, it remains unclear how the modality of breath signals (visual, haptic) is socially interpreted during collaborative tasks. In this mixed-methods study, we design and evaluate BreatheWithMe, a prototype for real-time sharing and receiving of breathing signals through visual, vibrotactile, or visual-vibrotactile modalities. In a within-subjects study (15 pairs), we investigated the effects of modality on breathing synchrony, social presence, and overall user experience. Key findings showed: (a) there were no significant effects of visualization modality on breathing synchrony, only on deliberate music-driven synchronization; (b) the visual modality was preferred over vibrotactile feedback, despite no differences across social presence dimensions; (c) BreatheWithMe was perceived as an insightful window into others, but raised data exposure and social acceptability concerns. We contribute insights into the design of multimodal, real-time breathing visualization systems for colocated, collaborative tasks.
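
Breathing synchrony between a pair can be quantified in several ways; the sketch below uses windowed Pearson correlation of the two respiration signals, which is one common choice (the study's exact synchrony metric may differ):

```python
import numpy as np

def windowed_synchrony(breath_a: np.ndarray, breath_b: np.ndarray,
                       fs: float, window_s: float = 10.0) -> float:
    """Mean Pearson correlation of two respiration signals over
    consecutive, non-overlapping windows of window_s seconds."""
    win = int(window_s * fs)
    scores = []
    for start in range(0, min(len(breath_a), len(breath_b)) - win + 1, win):
        a = breath_a[start:start + win]
        b = breath_b[start:start + win]
        if a.std() > 0 and b.std() > 0:  # skip flat (breath-hold) segments
            scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores)) if scores else 0.0
```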




  • FeelTheNews

    Affective haptics, thermal, vibrotactile, news, video

FeelTheNews: Affective Haptics for News Videos

2022-2023

Emotion plays a key role in the emerging wave of immersive, multi-sensory audience news engagement experiences. Since emotions can be triggered by somatosensory feedback, in this work we explore how augmenting news video watching with haptics can influence affective perceptions of news. Using a mixed-methods approach, we design and evaluate FeelTheNews, a prototype that combines vibrotactile and thermal stimulation (Matching, 70Hz/20°C, 200Hz/40°C) during news video watching. In a within-subjects study (N=20), we investigate the effects of haptic stimulation and video valence on perceived valence, emotion intensity, comfort, and overall haptic experiences. Findings showed: (a) news valence and emotion intensity ratings were not affected by haptics, (b) no stimulation was more comfortable than including stimulation, (c) attention and engagement with the news can override haptic sensations, and (d) users' perceived agency over their reactions is critical to avoid distrust. We contribute cautionary insights for haptic augmentation of the news watching experience.




  • Avatar Biosignals

    Biosignals, social VR, visualization, entertainment, perception

Avatar Biosignal Visualizations in Social Virtual Reality

2021-2022

Visualizing biosignals can be important for social Virtual Reality (VR), where avatar non-verbal cues are missing. While several biosignal representations exist, it remains unclear how to design effective visualizations for social VR entertainment and how users perceive them. We adopt a mixed-methods approach to design biosignals for social VR entertainment. Using survey (N=54), context-mapping (N=6), and co-design (N=6) methods, we derive four visualizations. We then ran a within-subjects study (N=32) in a virtual jazz bar to investigate how heart rate (HR) and breathing rate (BR) visualizations, and signal rate, influence perceived avatar arousal, user distraction, and preferences. Findings show that skeuomorphic visualizations for both biosignals allow differentiable arousal inference; skeuomorphic and particle visualizations were least distracting for HR, whereas all were similarly distracting for BR; and biosignal perceptions often depend on avatar relations, entertainment type, and emotion inference of avatars versus spaces. We contribute HR and BR visualizations, and considerations for designing biosignal visualizations for social VR entertainment.




  • RCEA-360VR

    Emotion, annotation, 360° video, virtual reality, ground truth, labels, viewport-dependent, real-time, continuous

Real-time, Continuous Emotion Annotation for 360° VR Videos

2019-2021

Precise emotion ground truth labels for 360° virtual reality (VR) video watching are essential for fine-grained predictions under varying viewing behavior. However, current annotation techniques either rely on post-stimulus discrete self-reports, or support real-time, continuous emotion annotation (RCEA) only for desktop/mobile settings. We present RCEA for 360° VR videos (RCEA-360VR), where we evaluate in a controlled study (N=32) the usability of two peripheral visualization techniques: HaloLight and DotSize. We furthermore develop a method that considers head movements when fusing labels. Using physiological, behavioral, and subjective measures, we show that (1) neither technique increases users’ workload or sickness, nor breaks presence; (2) our continuous valence and arousal annotations are consistent with discrete within-VR and original stimuli ratings; and (3) users exhibit high similarity in viewing behavior, where fused ratings perfectly align with intended labels. Our work contributes usable and effective techniques for collecting fine-grained, viewport-dependent emotion labels in 360° VR.
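
The head-movement-aware fusion step can be pictured as counting a viewer's continuous label toward a region only if their viewport actually covered that region at that moment; a simplified sketch (an illustration only, not the paper's exact fusion method):

```python
import numpy as np

def fuse_labels(annotations: np.ndarray, head_yaws: np.ndarray,
                region_yaw: float, fov_deg: float = 90.0) -> float:
    """Fuse per-viewer valence/arousal samples for one 360° video region.

    annotations: (n_viewers,) continuous labels at one timestamp
    head_yaws:   (n_viewers,) each viewer's head yaw in degrees
    region_yaw:  yaw of the region of interest in the 360° frame
    Only viewers whose viewport covered the region contribute.
    """
    diff = np.abs((head_yaws - region_yaw + 180) % 360 - 180)  # angular distance
    in_view = diff <= fov_deg / 2
    if not in_view.any():
        return float("nan")  # no viewer looked at this region
    return float(annotations[in_view].mean())
```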




  • CorrNet

    Emotion recognition, video, physiological signals, machine learning

Fine-Grained Emotion Recognition with Physio Signals

2019-2022

Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimulus, or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (a fine-grained segment of signals) using only wearable, physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimulus (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA), which we collected using a smart wristband and wearable eye tracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show that (1) instance segment lengths between 1–4 s result in the highest recognition accuracies; (2) accuracies between laboratory-grade and wearable sensors are comparable, even under low sampling rates (≤64 Hz); and (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.
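
The two ingredients named above, instance segmentation and correlation-based features, can be illustrated with a short sketch (the published CorrNet learns such features inside a network; this is only a schematic view):

```python
import numpy as np

def segment_instances(signal: np.ndarray, fs: float, seg_s: float = 2.0) -> np.ndarray:
    """Split a 1-D physiological signal (e.g., EDA) into fixed-length instances."""
    win = int(seg_s * fs)
    n = len(signal) // win
    return signal[:n * win].reshape(n, win)

def correlation_features(instances: np.ndarray) -> np.ndarray:
    """Correlate each instance with every other instance of the same stimulus,
    describing each segment relative to the whole viewing session."""
    return np.corrcoef(instances)  # (n_instances, n_instances)

# Example: 2 s instances from 60 s of EDA sampled at 4 Hz.
eda = np.random.randn(240)
instances = segment_instances(eda, fs=4.0)   # shape (30, 8)
features = correlation_features(instances)   # shape (30, 30)
```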




  • ThermalWear

    Thermal, affect, emotion, voice, prosody, wearable, display, chest

ThermalWear: Augmenting Voice Messages with Affect

2019-2020

Voice is a rich modality for conveying emotions; however, emotional prosody production can be situationally or medically impaired. Since thermal displays have been shown to evoke emotions, we explore how thermal stimulation can augment the perception of neutrally-spoken voice messages with affect. We designed ThermalWear, a wearable on-chest thermal display, then tested in a controlled study (N=12) the effects of fabric, thermal intensity, and direction of temperature change. Thereafter, we synthesized 12 neutrally-spoken voice messages, validated them (N=7), then tested (N=12) whether thermal stimuli can augment their perception with affect. We found that warm and cool stimuli (a) can be perceived on the chest, and quickly in the absence of fabric (4.7–5 s); (b) do not cause discomfort; (c) generally increase the arousal of voice messages; and (d) respectively increase and decrease message valence. We discuss how thermal displays can augment voice perception, which can enhance voice assistants and support individuals with emotional prosody impairments.




  • RCEA

    Emotion, annotation, mobile, video, real-time, continuous, labels

Real-time, Continuous Emotion Annotation for Mobile Video

2019-2020

Collecting accurate and precise emotion ground truth labels for mobile video watching is essential for ensuring meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports, or allow real-time, continuous emotion annotation (RCEA) only for desktop settings. Following a user-centric approach, we designed an RCEA technique for mobile video watching, and validated its usability and reliability in a controlled indoor study (N=12) and later an outdoor study (N=20). Drawing on physiological measures, interaction logs, and subjective workload reports, we show that (1) RCEA is perceived as usable for annotating emotions while watching mobile videos, without increasing users’ mental workload; and (2) the resulting time-variant annotations are comparable with the intended emotion attributes of the video stimuli (classification error for valence: 8.3%; arousal: 25%). We contribute a validated annotation technique and associated annotation fusion method, suitable for collecting fine-grained emotion annotations while users watch mobile videos.




  • Measure Social VR

    Social virtual reality, photo sharing, questionnaire, immersion, presence

Measuring Experiences in Social VR

2018-2019

Millions of photos are shared online daily, but the richness of interaction compared with face-to-face (F2F) sharing is still missing. While this may change with social Virtual Reality (socialVR), we still lack tools to measure such immersive and interactive experiences. In this paper, we investigate photo sharing experiences in immersive environments, focusing on socialVR. Running context mapping (N=10), an expert creative session (N=6), and an online experience clustering questionnaire (N=20), we develop and statistically evaluate a questionnaire to measure photo sharing experiences. We then ran a controlled, within-subjects study (N=26 pairs) to compare photo sharing under F2F, Skype, and Facebook Spaces. Using interviews, audio analysis, and our questionnaire, we found that socialVR can closely approximate F2F sharing. We contribute empirical findings on the immersiveness differences between digital communication media, and propose a socialVR questionnaire that can in the future generalize beyond photo sharing.




  • DeepSleep

    Ballistocardiography, deep learning, sleep classification

DeepSleep: Classifying Sleep Stages

2018-2019

Current techniques for tracking sleep are either obtrusive (polysomnography) or low in accuracy (wearables). In this early work, we model a sleep classification system using an unobtrusive ballistocardiographic (BCG) heart signal collected from a commercially available pressure-sensitive sensor sheet. We present DeepSleep, a hybrid deep neural network architecture comprising CNN and LSTM layers. We further employ a two-phase training strategy to build a pre-trained model and to tackle the limited dataset size. Our model achieves classification accuracies of 74%, 82%, 77%, and 63% on the Dozee BCG, MIT-BIH ECG, Dozee ECG, and Fitbit PPG datasets, respectively. Furthermore, our model shows a positive correlation (r = 0.43) with SATED perceived sleep quality scores. We show that BCG signals are effective for long-term sleep monitoring, but currently not suitable for medical diagnostic purposes.
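
The hybrid CNN+LSTM idea can be sketched in PyTorch as follows; layer sizes and the four-stage output are illustrative placeholders, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class DeepSleepNet(nn.Module):
    """Illustrative CNN+LSTM hybrid for sleep-stage classification from BCG epochs."""

    def __init__(self, n_classes: int = 4):  # e.g., wake / light / deep / REM
        super().__init__()
        # CNN front-end: local morphology of the heartbeat signal.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        # LSTM back-end: temporal structure across the epoch.
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, samples_per_epoch)
        z = self.cnn(x)             # (batch, 64, t)
        z = z.transpose(1, 2)       # (batch, t, 64) for the LSTM
        _, (h, _) = self.lstm(z)
        return self.head(h[-1])     # sleep-stage logits

model = DeepSleepNet()
logits = model(torch.randn(8, 1, 3000))  # e.g., 30 s epochs at 100 Hz
```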




  • Detect News Sympathy

    Twitter, sentiment analysis, crowdsourcing, machine learning

Measuring & Classifying News Media Sympathy

2017-2018

This paper investigates bias in coverage between Western and Arab media on Twitter after the November 2015 Beirut and Paris terror attacks. Using two Twitter datasets covering each attack, we investigate how Western and Arab media differed in coverage bias, sympathy bias, and resulting information propagation. We crowdsourced sympathy and sentiment labels for 2,390 tweets across four languages (English, Arabic, French, German), built a regression model to characterize sympathy, and thereafter trained a deep convolutional neural network to predict sympathy. Key findings show: (a) both events were disproportionately covered; (b) Western media exhibited less sympathy, with each outlet's coverage more sympathetic toward the country affected in its respective region; (c) sympathy predictions supported the ground truth analysis that Western media was less sympathetic than Arab media; and (d) sympathetic tweets do not spread any further. We discuss our results in light of global news flow, Twitter affordances, and public perception impact.
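
The label-then-predict pipeline can be shown with a simple stand-in classifier; the paper trains a deep convolutional network on multilingual tweets, so this TF-IDF plus logistic regression baseline (with hypothetical example tweets) only illustrates the pipeline's shape:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical crowdsourced labels: 1 = sympathetic, 0 = not sympathetic.
tweets = ["Our hearts go out to the victims of the attacks",
          "Breaking: explosions reported near the stadium"]
labels = [1, 0]

# Baseline sympathy classifier, standing in for the paper's deep CNN.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["Thoughts and prayers for everyone affected tonight"]))
```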




  • Face2Emoji

    Face2Emoji, emoji, crowdsourcing, emotion recognition, facial expression, input, keyboard, text entry

Face2Emoji

2016-2017

One way to convey nonverbal cues is by sending emoji (e.g., 😂), which requires users to select from large lists. Given the growing number of emoji, this can frustrate users. We therefore propose Face2Emoji, which uses the user's facial emotional expression to filter the emoji list down to the relevant emotion category. To validate our method, we crowdsourced 15,155 emoji-to-emotion labels from 308 website visitors, and found that our 202 tested emoji can indeed be classified into seven basic emotion categories (including Neutral). To recognize facial emotional expressions, we use deep convolutional neural networks, where early experiments show an overall accuracy of 65% on the FER-2013 dataset. We discuss our future research on Face2Emoji: how to improve model performance, what type of usability test to run with users, and which measures best capture the usefulness and playfulness of our system.
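
The core filtering idea is a mapping from the recognized emotion category to its emoji subset; a minimal sketch (the per-category emoji here are illustrative examples, not the project's full crowdsourced mapping of 202 emoji):

```python
# Illustrative emotion-to-emoji mapping over the seven categories above.
EMOJI_BY_EMOTION = {
    "happiness": ["😂", "😊", "😁"],
    "sadness":   ["😢", "😞"],
    "anger":     ["😠", "😡"],
    "surprise":  ["😮"],
    "fear":      ["😨"],
    "disgust":   ["🤢"],
    "neutral":   ["😐"],
}

def filter_emoji(detected_emotion: str) -> list:
    """Return only the emoji matching the facial expression recognized
    by the CNN, instead of the full keyboard list."""
    return EMOJI_BY_EMOTION.get(detected_emotion, EMOJI_BY_EMOTION["neutral"])

print(filter_emoji("happiness"))  # ['😂', '😊', '😁']
```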




  • Wayfinding Strategies

    HCI4D, ICT4D, Lebanon, navigation, wayfinding, mapping services, giving directions, addressing, strategies

HCI4D Wayfinding Strategies

2015-2016

While HCI for development (HCI4D) research has typically focused on the technological practices of poor and low-literate communities, little research has addressed how technology-literate individuals living in a poor-infrastructure environment use technology. Our work fills this gap by focusing on Lebanon, a country with longstanding political instability, where wayfinding issues stem from missing street signs and names, poor road infrastructure, and a non-standardized addressing system. We examine the relationship between technology-literate individuals' navigation and direction-giving strategies and their usage of current digital navigation aids. Drawing on an interview study (N=12) and a web survey (N=85), our findings show that while these individuals rely on mapping services and WhatsApp's share-location feature to aid wayfinding, many technical and cultural problems persist that are currently resolved through social querying.




  • VapeTracker

    E-cigarettes, vaping, tracking, sensors, behavior change technology

VapeTracker

2015-2016

Despite the current controversy over e-cigarettes as a smoking cessation aid, we present early work based on a web survey (N=249) showing that some e-cigarette users (46.2%) want to quit vaping altogether, and that trackable behavioral feedback could support that goal. Based on our survey findings, we designed VapeTracker, an early prototype that can attach to any e-cigarette device to track vaping activity. We are currently exploring how to improve the VapeTracker prototype using ambient feedback mechanisms, and how to incorporate behavior change models to support quitting e-cigarettes.




  • MagiThings

    Security, usability, music composition, gaming, 3D gestures, magnets

MagiThings

2013-2014

As part of an internship at Telekom Innovation Labs (T-Labs) in Berlin, Germany, I designed and executed (under the supervision of Dr. Hamed Ketabdar) three controlled user studies within the MagiThings project, using the Around Device Interaction (ADI) paradigm to investigate (a) the usability and security of magnet-based air signature authentication for smartphone access, and (b) playful music composition and gaming.




  • CountMeIn

    Urban space, gaming, NFC, waiting time, touch interaction, design, development, Android

CountMeIn

2013

In this work, we focused on improving the waiting time experience in public places (e.g., waiting for a train) by increasing collaboration and play among friends and strangers. We tested whether an NFC-enabled mobile pervasive game (allowing physical interaction with an NFC tag display) reaps more social benefits than a touchscreen version.




  • Photographer Paths

    Social media mining, Flickr, scenic routes, maps, Amsterdam

Photographer Paths

2012-2013

I conceptualized, designed, evaluated, and supervised the technical development of a route recommendation system that uses large amounts of geotagged image data (from Flickr) to compute sequence-based, non-efficiency-driven routes in the city of Amsterdam. The central premise is that pedestrians do not always want to get from point A to point B as quickly as possible, but rather may want to explore hidden, more 'local' routes.
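
The premise can be illustrated by scoring candidate routes by the density of nearby geotagged photos (illustrative scoring only; the actual system computes sequence-based routes):

```python
import math

def photo_score(route, photos, radius_m=100.0):
    """Count geotagged Flickr photos within radius_m of any waypoint.

    route, photos: lists of (lat, lon) tuples in degrees.
    """
    def dist_m(a, b):
        # Equirectangular approximation; accurate enough at city scale.
        lat = math.radians((a[0] + b[0]) / 2)
        dx = math.radians(b[1] - a[1]) * math.cos(lat) * 6371000
        dy = math.radians(b[0] - a[0]) * 6371000
        return math.hypot(dx, dy)
    return sum(1 for p in photos
               if any(dist_m(p, w) <= radius_m for w in route))

# Among candidate routes, recommend the one passing the most photographed spots.
```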




  • 3D Gestures and Errors

    Errors, usability, user experience, 3D gestures, lab study, gesture recognition

3D Gestures and Errors

2011-2012

As part of an internship at Nokia Research Center Tampere, I designed and executed (in collaboration with Nokia Research Center Espoo) a controlled study investigating the effects of errors on the usability and user experience (UX) of device-based gesture interaction.




  • Graffiquity

    Location-based, multimedia messaging, urban space, user behavior, diary study, longitudinal

Graffiquity

2009-2010

As part of work under the MOCATOUR (Mobile Cultural Access for Tourists) project (part of Amsterdam Living Lab), I designed and executed a user study to investigate which factors are important when people create location-aware multimedia messages. Using the Graffiquity prototype as a probe, I ran a two-week paper-diary study to examine this messaging behavior. This involved Android interface development for the Graffiquity prototype, as well as designing low-fidelity diaries to gather longitudinal qualitative user data.
