Augmenting Social Situations and Democratizing Tools

Eyewear 2021: The Fourth Workshop on Eyewear Computing

25th September 2021, before UbiComp/ISWC

Paper Deadline: 30th June 2021

Call for participation


Head-worn sensing, especially embedded in augmented and virtual reality (AR/VR) head-mounted displays and smart glasses, is increasingly moving from niche applications and small-scale research prototypes to large-scale consumer adoption (e.g. Oculus Quest 2, HoloLens 2, J!NS MEME, Bose Frames).

Significant progress in sensing technologies and modalities has led to a constant increase in commercially available products and unobtrusive, affordable research prototypes. These recent advances allow the eyewear community to conduct large-scale in-situ studies, one of the favored research methodologies in ubiquitous computing. One manifestation of this can already be observed in large-scale dataset recording, eyewear student competitions, and programming seminars.

In this workshop we focus on supporting these large-scale uses of eyewear computing, discussing lessons learned from early deployments, how to empower the community with better hardware/software prototyping tools, and the establishment of open datasets.

In addition, we will discuss the long-term psychological and physical impacts and risks of the technology, which become increasingly important as devices reach a wider consumer audience.

The proposed workshop will bring together researchers from a wide range of disciplines, such as mobile and ubiquitous computing, activity recognition, eye tracking, optics, human vision and perception, usability, and systems engineering. This year it will also bring in researchers from neuroscience, psychology, and other fields who might want to apply or use the research systems. The workshop continues the 2016/2018/2019 editions and will focus on discussing how to democratize tools for researchers who want to apply eyewear computing (sensing/interaction) in their fields, yet are not wearable computing experts or computer scientists.


Keynote 13:30 - 14:30 UTC

Keynote will be given by Prof. Thad Starner (Georgia Tech).

Session One: Technology 14:30 - 15:30 UTC

Project Ariel: An Open Source Augmented Reality Headset for Industrial Applications, James O Campbell, Vincent Ta, Alvaro Cassinelli, Damien Rompapas

A Sub-Milliwatt and Sub-Millisecond 3-D Gaze Estimator for Ultra Low-Power AR Applications, Sungmin Moon, Chao Zhang, Sooill Park, Hui Zhang, Woo-Shik Kim, Jong Hwan Ko

Prototyping Smart Eyewear with Capacitive Sensing for Facial and Head Gesture Detection, Denys J.C. Matthies, Alex Woodall, Bodo Urban

10 min discussion

Showcase 15:30 - 16:00 UTC

We encourage participants to demonstrate research prototypes related to their submissions.

Session Two: Applications 16:00 - 17:00 UTC

The Predictive Power of Eye-Tracking Data in an Interactive AR Learning Environment, Dr David Dzsotjan, Kim Ludwig-Petsch, Sergey Mukhametov, Shoya Ishimaru, Stefan Kuechemann, Jochen Kuhn

Using Smart Eyewear to Sense Electrodermal Activity While Reading, Christopher Changmok Kim, Jiawen Han, Dingding Zheng, George Chernyshov, Kai Kunze

Effects of Counting Seconds in the Mind while Reading, Pramod Vadiraja, Jayasankar Santhosh, Hanane Moulay, Andreas Dengel, Shoya Ishimaru

Closing 17:30 - 18:00 UTC

We will summarize the key experiences from the workshop and plan follow-up activities.


                           PDT (US)  EDT (US)  CEST (Europe)  JST (Japan)  AEST (Australia)
Keynote                    06:30     09:30     15:30          22:30        23:30
                           07:00     10:00     16:00          23:00        00:00
Session One: Technology    07:30     10:30     16:30          23:30        00:30
                           08:00     11:00     17:00          00:00        01:00
Showcase                   08:30     11:30     17:30          00:30        01:30
                           09:00     12:00     18:00          01:00        02:00
Session Two: Applications  09:30     12:30     18:30          01:30        02:30
                           10:00     13:00     19:00          02:00        03:00
Closing                    10:30     13:30     19:30          02:30        03:30


Kirill Ragozin is a postdoctoral researcher at Keio University Graduate School of Media Design. His major research contributions are in mixed reality and embodied thermal interactions. His research interests include immersive digital media, eye tracking and interaction design.

Kai Kunze is a Professor at Keio University Graduate School of Media Design. His major research contributions are in pervasive computing with a focus on augmenting human abilities.

Teresa Hirzle is a fourth-year Ph.D. student at Ulm University. Her research interests lie in analysing the impact of head-worn technology (in particular VR) on user comfort and in developing suitable measurement tools for it.

Benjamin Tag is a postdoctoral researcher and associate lecturer at the School of Computing and Information Systems at the University of Melbourne. His research focuses on the conceptualisation of digital emotion regulation, and investigation of human cognition using biometric sensors and psychological test methods.

Yuji Uema is a researcher at JINS Inc., where he develops smart eyewear and conducts feasibility studies with a special focus on HCI, education, and medical applications. His Ph.D. research at The University of Tokyo includes the analysis and estimation of cognitive load based on eye blinks and eye movement.

Enrico Rukzio is a Professor of Media Informatics at Ulm University. His research focuses on mobile and wearable interaction, computerized eyewear and automotive user interfaces.

Jamie A Ward is a lecturer at Goldsmiths, University of London. He works on wearable computing, with contributions to topics like social neuroscience, activity recognition, performance evaluation, and applications to real-world problems in health, industry, and the arts.

Call for participation

Paper Deadline: 30th June 2021, via the Precision Conference Submissions website. Select SIGCHI, UbiComp/ISWC 2021, and the Eyewear Computing Workshop.

Submissions should use the ACM ‘sigconf' template and should not be longer than 8 pages including references. All submissions will be peer-reviewed by a program committee.

The human face, holding the majority of human senses, provides versatile information about a person's cognitive and affective states. Using head-worn technology, user states, such as reading, walking, fatigue, or cognitive load, can be recognized, enabling new application scenarios such as quantified self for the mind. In addition, significant progress in sensing technologies and modalities has led to a constant increase in unobtrusive and affordable head-worn sensing devices, such as smart glasses like Google Glass or J!NS MEME. With the resulting increasing ubiquity of the technology, new opportunities arise for applications that track social behaviours and interactions between groups of people in real-world settings.

This workshop aims to identify key factors in large-scale uses of eyewear computing. More precisely, we are going to summarize lessons learned from early deployment, focus on ways to empower the community with high-quality hardware and software prototyping tools, and specifically discuss the establishment of open-source datasets. With the wider distribution of head-worn sensing technology to the public, long-term impacts of the technology become increasingly important. Therefore, we also welcome topics that are concerned with physical or psychological aspects of head-worn sensing devices. We invite submissions of position papers (2-4 pages in the ACM sigconf format, excluding references) that cover topics including, but not limited to:

  • Open eyewear tools and datasets
  • Eyewear sensing and actuation technologies
  • Smart eyewear interactions
  • Application scenarios of head-worn sensing/interaction devices
  • Impact and risks of long-term sensing
  • Smart eyewear user experience design

Submissions will go through a single-phase review process with at least two reviewers. They will be assessed based on their relevance, originality, and their potential to initiate a fruitful discussion at the workshop. Note that position papers are not expected to present finished research projects; rather, we ask for thought-provoking ideas or initial explorations of a topic. Position papers will be reviewed by two of the workshop organizers. At the workshop, accepted submissions will be presented as a 5-minute prerecorded video in Pecha Kucha style. At least one author of each accepted submission must attend the workshop.