GazeLens: Guiding Attention to Improve Gaze Interpretation in Hub-Satellite Collaboration
Paper in proceedings, 2019

In hub-satellite collaboration using video, interpreting gaze direction is critical for communication between hub coworkers sitting around a table and their remote satellite colleague. However, 2D video distorts images and makes this interpretation inaccurate. We present GazeLens, a video conferencing system that improves hub coworkers’ ability to interpret the satellite worker’s gaze. A 360° camera captures the hub coworkers and a ceiling camera captures artifacts on the hub table. The system combines these two video feeds in an interface. Lens widgets strategically guide the satellite worker’s attention toward specific areas of her/his screen, allowing hub coworkers to clearly interpret her/his gaze direction. Our evaluation shows that GazeLens (1) increases hub coworkers’ overall gaze interpretation accuracy by 25.8% in comparison to a conventional video conferencing system, (2) especially for physical artifacts on the hub table, and (3) improves hub coworkers’ ability to distinguish between gazes toward people and artifacts. We discuss how screen space can be leveraged to improve gaze interpretation.

lens widgets

gaze

remote collaboration

telepresence

Authors

Khanh Duy Le

Chalmers, Computer Science and Engineering, Interaction Design

Ignacio Avellino

Sorbonne Université

Cédric Fleury

Université Paris-Sud

Morten Fjeld

Chalmers, Computer Science and Engineering, Interaction Design

Andreas Kunz

Eidgenössische Technische Hochschule Zürich (ETH)

Human-Computer Interaction

0302-9743 (ISSN), 1611-3349 (eISSN)

Vol. 11747 LNCS, pp. 282–303
978-3-030-29383-3 (ISBN)

17th IFIP TC 13 International Conference on Human-Computer Interaction
Paphos, Cyprus

Subject categories

Human-computer interaction (interaction design)

DOI

10.1007/978-3-030-29384-0_18

More information

Last updated

2024-10-07