Research



  • Detect News Sympathy

    Twitter, sentiment analysis, crowdsourcing, machine learning

Measuring & Classifying News Media Sympathy

2017-2018

This paper investigates bias in coverage between Western and Arab media on Twitter after the November 2015 Beirut and Paris terror attacks. Using two Twitter datasets, one per attack, we investigate how Western and Arab media differed in coverage bias, sympathy bias, and the resulting information propagation. We crowdsourced sympathy and sentiment labels for 2,390 tweets across four languages (English, Arabic, French, German), built a regression model to characterize sympathy, and then trained a deep convolutional neural network to predict it. Key findings show that: (a) both events were disproportionately covered; (b) Western media exhibited less sympathy, with each region's media more sympathetic towards the country affected in its own region; (c) the model's sympathy predictions supported the ground-truth finding that Western media was less sympathetic than Arab media; and (d) sympathetic tweets did not spread further than other tweets. We discuss our results in light of global news flow, Twitter affordances, and the impact on public perception.
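As a rough illustration of the prediction step, the sketch below shows a minimal 1D convolutional text classifier in Python (Keras). The vocabulary size, sequence length, layer sizes, and placeholder data are assumptions for illustration only; they are not the architecture or preprocessing used in the paper.

    # Minimal sketch of a 1D-CNN tweet classifier for sympathy prediction.
    # Hyperparameters and data below are placeholders, not the paper's setup.
    import numpy as np
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import (Embedding, Conv1D, GlobalMaxPooling1D,
                                         Dense, Dropout)

    VOCAB_SIZE = 20000   # assumed vocabulary size after tokenization
    MAX_LEN = 50         # assumed maximum number of tokens per tweet

    model = Sequential([
        Embedding(VOCAB_SIZE, 128),          # token ids -> dense vectors
        Conv1D(128, 5, activation="relu"),   # n-gram-like feature detectors
        GlobalMaxPooling1D(),                # keep strongest response per filter
        Dense(64, activation="relu"),
        Dropout(0.5),
        Dense(1, activation="sigmoid"),      # P(tweet is sympathetic)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # x: (n_tweets, MAX_LEN) integer token ids, y: 0/1 crowdsourced labels
    x = np.random.randint(0, VOCAB_SIZE, size=(256, MAX_LEN))  # placeholder data
    y = np.random.randint(0, 2, size=(256,))                   # placeholder labels
    model.fit(x, y, epochs=2, batch_size=32)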

  • El Ali, A., Stratmann, T., Park, S., Schöning, J., Heuten, W. & Boll, S. (2018). Measuring, Understanding, and Classifying News Media Sympathy on Twitter after Crisis Events. To be published in Proc. CHI '18. Montréal, Canada. pdf bib
    @inproceedings{Elali2018,
      title = {Measuring, Understanding, and Classifying News Media Sympathy on Twitter after Crisis Events},
      author = {Abdallah El Ali and Tim C Stratmann and Souneil Park and Johannes Sch{\"o}ning and Wilko Heuten and Susanne CJ Boll},
      booktitle = {Proceedings of the International Conference on Human Factors in Computing Systems 2018},
      series = {CHI '18},
      year = {2018},
      location = {Montreal, Canada},
      pages = {#-#},
      url = {https://doi.org/10.1145/3173574.3174130}
      }
       



  • Face2Emoji

    Face2Emoji, emoji, crowdsourcing, emotion recognition, facial expression, input, keyboard, text entry

Face2Emoji

2016-2017

One way to convey nonverbal cues is by sending emoji (e.g., 😂), which requires users to select from large lists. Given the growing number of emoji, this can frustrate users. We instead propose Face2Emoji, which uses a user's facial emotional expression to filter the emoji keyboard down to the relevant emotion category. To validate our method, we crowdsourced 15,155 emoji-to-emotion labels from 308 website visitors and found that the 202 tested emoji can indeed be classified into seven basic emotion categories (including Neutral). To recognize facial emotional expressions, we use deep convolutional neural networks; early experiments show an overall accuracy of 65% on the FER-2013 dataset. We discuss future research on Face2Emoji: how to improve model performance, what type of usability test to run with users, and which measures best capture the usefulness and playfulness of our system. A minimal sketch of the filtering step follows below.
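The filtering step itself is simple once an emotion has been recognized; the Python sketch below shows the idea. The emotion-to-emoji assignments are purely illustrative and are not the crowdsourced mapping from the study.

    # Sketch of Face2Emoji's filtering step: given an emotion predicted from
    # the user's facial expression, show only emoji in that emotion category.
    # The assignments below are illustrative, not the crowdsourced labels.
    EMOJI_BY_EMOTION = {
        "happiness": ["😂", "😊", "😁"],
        "sadness":   ["😢", "😞", "😭"],
        "anger":     ["😠", "😡"],
        "fear":      ["😨", "😱"],
        "surprise":  ["😮", "😲"],
        "disgust":   ["🤢"],
        "neutral":   ["😐", "🙂"],
    }

    def filter_emoji(predicted_emotion):
        """Return the emoji subset for the recognized emotion ([] if unknown)."""
        return EMOJI_BY_EMOTION.get(predicted_emotion.lower(), [])

    # e.g., the facial-expression classifier predicts "happiness":
    print(filter_emoji("happiness"))  # ['😂', '😊', '😁']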

  • El Ali, A., Wallbaum, T., Wasmann, M., Heuten, W. & Boll, S. (2017). Face2Emoji: Using Facial Emotional Expressions to Filter Emojis. In Proc. CHI '17 EA. Denver, CO, USA. pdf doi bib
    @inproceedings{ElAli:2017:FUF:3027063.3053086,
                 author = {El Ali, Abdallah and Wallbaum, Torben and Wasmann, Merlin and Heuten, Wilko and Boll, Susanne CJ},
                 title = {Face2Emoji: Using Facial Emotional Expressions to Filter Emojis},
                 booktitle = {Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
                 series = {CHI EA '17},
                 year = {2017},
                 isbn = {978-1-4503-4656-6},
                 location = {Denver, Colorado, USA},
                 pages = {1577--1584},
                 numpages = {8},
                 url = {http://doi.acm.org/10.1145/3027063.3053086},
                 doi = {10.1145/3027063.3053086},
                 acmid = {3053086},
                 publisher = {ACM},
                 address = {New York, NY, USA},
                 keywords = {crowdsourcing, emoji, emotion recognition, face2emoji, facial expression, input, keyboard, text entry},
               }
             



  • Wayfinding Strategies

    HCI4D, ICT4D, Lebanon, navigation, wayfinding, mapping services, giving directions, addressing, strategies

HCI4D Wayfinding Strategies

2015-2016

While HCI for development (HCI4D) research has typically focused on the technological practices of poor and low-literate communities, little research has addressed how technology-literate individuals living in a poor-infrastructure environment use technology. Our work fills this gap by focusing on Lebanon, a country with longstanding political instability, and the wayfinding issues there stemming from missing street signs and names, a poor road infrastructure, and a non-standardized addressing system. We examine the relationship between technology-literate individuals' navigation and direction-giving strategies and their usage of current digital navigation aids. Drawing on an interview study (N=12) and a web survey (N=85), our findings show that while these individuals rely on mapping services and WhatsApp's location-sharing feature to aid wayfinding, many technical and cultural problems persist that are currently resolved through social querying.

  • El Ali, A., Bachour, K., Heuten, W. & Boll, S. (2016). Technology Literacy in Poor Infrastructure Environments: Characterizing Wayfinding Strategies in Lebanon. In Proc. MobileHCI '16. Florence, Italy. pdf doi bib
    @inproceedings{ElAli:2016:TLP:2935334.2935352,
     author = {El Ali, Abdallah and Bachour, Khaled and Heuten, Wilko and Boll, Susanne},
     title = {Technology Literacy in Poor Infrastructure Environments: Characterizing Wayfinding Strategies in Lebanon},
     booktitle = {Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services},
     series = {MobileHCI '16},
     year = {2016},
     isbn = {978-1-4503-4408-1},
     location = {Florence, Italy},
     pages = {266--277},
     numpages = {12},
     url = {http://doi.acm.org/10.1145/2935334.2935352},
     doi = {10.1145/2935334.2935352},
     acmid = {2935352},
     publisher = {ACM},
     address = {New York, NY, USA},
     keywords = {HCI4D, ICT4D, Lebanon, addressing, giving directions, mapping services, mobile, navigation, strategies, wayfinding},
    }
       



  • VapeTracker

    E-cigarettes, vaping, tracking, sensors, behavior change technology

VapeTracker

2015-2016

Despite current controversy over e-cigarettes as a smoking cessation aid, we present early work based on a web survey (N=249) showing that some e-cigarette users (46.2%) want to quit altogether, and that tracked behavioral feedback could support that goal. Based on our survey findings, we designed VapeTracker, an early prototype that attaches to any e-cigarette device to track vaping activity. We are currently exploring how to improve the VapeTracker prototype with ambient feedback mechanisms, and how to incorporate behavior change models to support quitting e-cigarettes.

  • El Ali, A., Matviienko, A., Feld, Y., Heuten, W. & Boll, S. (2016). VapeTracker: Tracking Vapor Consumption to Help E-cigarette Users Quit. In Proc. CHI '16 EA. San Jose, CA, USA. pdf doi bib
    @inproceedings{ElAli:2016:VTV:2851581.2892318,
     author = {El Ali, Abdallah and Matviienko, Andrii and Feld, Yannick and Heuten, Wilko and Boll, Susanne},
     title = {VapeTracker: Tracking Vapor Consumption to Help E-cigarette Users Quit},
     booktitle = {Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
     series = {CHI EA '16},
     year = {2016},
     isbn = {978-1-4503-4082-3},
     location = {San Jose, California, USA},
     pages = {2049--2056},
     numpages = {8},
     url = {http://doi.acm.org/10.1145/2851581.2892318},
     doi = {10.1145/2851581.2892318},
     acmid = {2892318},
     publisher = {ACM},
     address = {New York, NY, USA},
     keywords = {behavior change technology, e-cigarettes, habits, health, prototype, sensors, tracking, vapetracker, vaping},
    }
       



  • MagiThings

    Security, usability, music composition, gaming, 3D gestures, magnets

MagiThings

2013-2014

As part of an internship at Telekom Innovation Labs (T-Labs) in Berlin, Germany, I designed and executed (under the supervision of Dr. Hamed Ketabdar) three controlled user studies within the MagiThings project, using the Around Device Interaction (ADI) paradigm to investigate (a) magnet-based air signature authentication as a usable and secure method of smartphone access, and (b) playful music composition and gaming.

  • El Ali, A. & Ketabdar, H. (2015). Investigating Handedness in Air Signatures for Magnetic 3D Gestural User Authentication. In Proc. MobileHCI '15 Adjunct. Copenhagen, Denmark. pdf doi
  • El Ali, A. & Ketabdar, H. (2013). Magnet-based Around Device Interaction for Playful Music Composition and Gaming. To be published in International Journal of Mobile Human Computer Interaction (IJMHCI). Preprint available: pdf doi



  • CountMeIn

    Urban space, gaming, NFC, waiting time, touch interaction, design, development, Android

CountMeIn

2013

In this work, we focused on improving the waiting-time experience in public places (e.g., waiting for a train) by increasing collaboration and play among friends and strangers. We tested whether an NFC-enabled mobile pervasive game, which allows physical interaction with an NFC tag display, reaps more social benefits than a touchscreen-only version.

  • Wolbert, M., El Ali, A. & Nack, F. (2014). CountMeIn: Evaluating Social Presence in a Collaborative Pervasive Mobile Game Using NFC and Touchscreen Interaction. In Proc. ACE '14. Madeira, Portugal. pdf doi
  • Wolbert, M. & El Ali, A. (2013). Evaluating NFC and Touchscreen Interactions in Collaborative Mobile Pervasive Games. In Proc. MobileHCI '13. Munich, Germany. pdf doi



  • Photographer Paths

    Social media mining, Flickr, scenic routes, maps, Amsterdam

Photographer Paths

2012-2013

I conceptualized, designed, evaluated, and supervised the technical development of a route recommendation system that uses large amounts of geotagged image data (from Flickr) to compute sequence-based, non-efficiency-driven routes in the city of Amsterdam. The central premise is that pedestrians do not always want to get from point A to point B as quickly as possible, but would rather explore hidden, more 'local' routes.
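To give a flavor of the idea, the sketch below scores candidate walking routes by the density of geotagged photos along them. This is a deliberately simplified stand-in: the actual system aligns photographers' photo sequences (see the CSCW '13 paper below) rather than counting photos per grid cell, and the cell size is an assumed value.

    # Simplified sketch: score candidate routes by geotagged-photo density.
    # The real system uses sequence alignment of photographers' photo
    # sequences; this grid-density scoring only illustrates the premise.
    from collections import Counter

    def cell(lat, lon, size=0.001):
        """Snap a (lat, lon) coordinate to a coarse grid cell (size assumed)."""
        return (int(lat / size), int(lon / size))

    def photo_density(photos):
        """Count geotagged photos, given as (lat, lon) pairs, per grid cell."""
        return Counter(cell(lat, lon) for lat, lon in photos)

    def route_score(route, density):
        """Total photo activity along a route given as a list of waypoints."""
        return sum(density[cell(lat, lon)] for lat, lon in route)

    # Pick the most 'photogenic' of several candidate routes from A to B:
    # density = photo_density(flickr_photos)   # flickr_photos: hypothetical input
    # best = max(candidate_routes, key=lambda r: route_score(r, density))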

  • El Ali, A., van Sas, S. & Nack, F. (2013). Photographer Paths: Sequence Alignment of Geotagged Photos for Exploration-based Route Planning. In proceedings of the 16th ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '13), 2013, San Antonio, Texas. pdf doi bib
    @inproceedings{ElAli:2013:PPS:2441776.2441888,
     author = {El Ali, Abdallah and van Sas, Sicco N.A. and Nack, Frank},
     title = {Photographer Paths: Sequence Alignment of Geotagged Photos for Exploration-based Route Planning},
     booktitle = {Proceedings of the 2013 Conference on Computer Supported Cooperative Work},
     series = {CSCW '13},
     year = {2013},
     isbn = {978-1-4503-1331-5},
     location = {San Antonio, Texas, USA},
     pages = {985--994},
     numpages = {10},
     url = {http://doi.acm.org/10.1145/2441776.2441888},
     doi = {10.1145/2441776.2441888},
     acmid = {2441888},
     publisher = {ACM},
     address = {New York, NY, USA},
     keywords = {exploration-based route planning, geotagged photos, sequence alignment, ugc, urban computing},
    }
       



  • 3D Gestures and Errors

    Errors, usability, user Experience, 3D gestures, lab study, gesture recognition

3D Gestures and Errors

2011-2012

As part of an internship at Nokia Research Center Tampere, I designed and executed (in collaboration with Nokia Research Center Espoo) a controlled study that investigated the effects of error on the usability and UX of device-based gesture interaction.

  • El Ali, A., Kildal, J. & Lantz, V. (2012). Fishing or a Z?: Investigating the Effects of Error on Mimetic and Alphabet Device-based Gesture Interaction. In Proceedings of the 14th international conference on Multimodal Interaction (ICMI '12), 2012, Santa Monica, California. [Best student paper award] pdf doi bib
    @inproceedings{ElAli:2012:FZI:2388676.2388701,
     author = {El Ali, Abdallah and Kildal, Johan and Lantz, Vuokko},
     title = {Fishing or a Z?: Investigating the Effects of Error on Mimetic and Alphabet Device-based Gesture Interaction},
     booktitle = {Proceedings of the 14th ACM International Conference on Multimodal Interaction},
     series = {ICMI '12},
     year = {2012},
     isbn = {978-1-4503-1467-1},
     location = {Santa Monica, California, USA},
     pages = {93--100},
     numpages = {8},
     url = {http://doi.acm.org/10.1145/2388676.2388701},
     doi = {10.1145/2388676.2388701},
     acmid = {2388701},
     publisher = {ACM},
     address = {New York, NY, USA},
     keywords = {alphabet gestures, device-based gesture interaction, errors, mimetic gestures, usability, workload},
    }
       



  • Graffiquity

    Location-based, multimedia messaging, urban space, user behavior, diary study, longitudinal

Graffiquity

2009-2010

As part of work under the MOCATOUR (Mobile Cultural Access for Tourists) project (part of the Amsterdam Living Lab), I designed and executed a user study to investigate which factors are important when people create location-aware multimedia messages. Using the Graffiquity prototype as a probe, I ran a two-week study with a paper-diary method to study this messaging behavior. This involved Android interface development for the Graffiquity prototype, as well as designing low-fidelity diaries to gather longitudinal qualitative user data.

• El Ali, A., Nack, F. & Hardman, L. (2011). Good Times?! 3 Problems and Design Considerations for Playful HCI. In International Journal of Mobile Human Computer Interaction (IJMHCI), 3(3), pp. 50-65. pdf doi
  • El Ali, A., Nack, F. & Hardman, L. (2010). Understanding contextual factors in location-aware multimedia messaging. In Proceedings of the 12th international conference on Multimodal Interfaces (ICMI-MLMI '10), 2010, Beijing, China. pdf doi