Effect of the Hand and Gaze Pointers in Remote Collaboration

Cited by: 0
Authors
Shin, Jihye [1 ]
Suh, Gayun [1 ]
Kim, Soo-Hyung [1 ]
Yang, Hyung-Jeong [1 ]
Shin, Ji-Eun [2 ]
Lee, Gun [3 ]
Kim, Seungwon [1 ]
Affiliations
[1] Chonnam Natl Univ, Dept Artificial Intelligence Convergence, Gwangju 61186, South Korea
[2] Chonnam Natl Univ, Dept Psychol, Gwangju 61186, South Korea
[3] Univ South Australia, Adelaide, SA 5001, Australia
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Collaboration; Visualization; Three-dimensional displays; Visual communication; Aerospace electronics; Planning; Cameras; Virtual reality; Videoconferences; Thumb; Remote collaboration; gaze pointer; hand pointer; visual communication cue;
DOI
10.1109/ACCESS.2024.3500065
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Hand and gaze pointers are effective visual communication cues for remote collaboration. In this paper, we studied the effect of combining them when two basic visual cues (hand gestures and sketches) are already available. We conducted a user study with 24 participants, who performed two tasks (Tangram and Origami) under four experimental conditions defined by two independent variables (added hand and gaze pointers), with hand gesture and sketch cues available as the baseline. The results demonstrated that the added hand and gaze pointer cues improved co-presence and reduced the task load for remote experts. Additionally, the added gaze pointer cue was effective in fostering behavioral interdependence and message understanding. Participants most preferred the condition combining all cues, followed by the condition with the added gaze pointer, which resulted in quicker task completion and more accurate communication between collaborators. These findings suggest that adding hand and gaze pointer cues can significantly enhance the user experience in remote collaboration.
Pages: 172774-172784 (11 pages)
Related Papers (50 total)
  • [31] ROWLEY HAND IN COLLABORATION
    DARBY, T
    LIBRARY, 1986, 8 (01): 68 - 69
  • [32] Comparison of Gaze and Mouse Pointers for Video-based Collaborative Physical Task
    Akkil, Deepak
    Isokoski, Poika
    INTERACTING WITH COMPUTERS, 2018, 30 (06) : 524 - 542
  • [33] Effect of Full Body Avatar in Augmented Reality Remote Collaboration
    Wang, Tzu-Yang
    Sato, Yuji
    Otsuki, Mai
    Kuzuoka, Hideaki
    Suzuki, Yusuke
    2019 26TH IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES (VR), 2019, : 1221 - 1222
  • [34] Effect of View Sharing on Spatial Knowledge Acquisition in Remote Collaboration
    Wang, Tzu-Yang
    Kawaguchi, Ikkaku
    Kuzuoka, Hideaki
    Otsuki, Mai
    PROCEEDINGS OF THE 2022 ACM SYMPOSIUM ON SPATIAL USER INTERACTION, SUI 2022, 2022,
  • [35] REMOCOP: Remote collaboration platform for a next generation remote collaboration support system
    Shirai, Daisuke
    Mochida, Yasuhiro
    Fujii, Tatsuya
    NTT Technical Review, 2016, 14 (03):
  • [36] Deaf and Hearing Students' Eye Gaze Collaboration
    Kushalnagar, Raja S.
    Kushalnagar, Poorna
    Pelz, Jeffrey B.
    COMPUTERS HELPING PEOPLE WITH SPECIAL NEEDS, PT I, 2012, 7382 : 92 - 99
  • [37] Modelling and interpretation of gas detection using remote laser pointers
    Hodgkinson, J
    van Well, B
    Padgett, M
    Pride, RD
    SPECTROCHIMICA ACTA PART A-MOLECULAR AND BIOMOLECULAR SPECTROSCOPY, 2006, 63 (05) : 929 - 939
  • [38] eyemR-Vis: Using Bi-Directional Gaze Behavioural Cues to Improve Mixed Reality Remote Collaboration
    Jing, Allison
    May, Kieran William
    Naeem, Mahnoor
    Lee, Gun
    Billinghurst, Mark
    EXTENDED ABSTRACTS OF THE 2021 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'21), 2021,
  • [39] MirrorTablet: Exploring a Low-Cost Mobile System for Capturing Unmediated Hand Gestures in Remote Collaboration
    Le, Khanh-Duy
    Zhu, Kening
    Fjeld, Morten
    16TH INTERNATIONAL CONFERENCE ON MOBILE AND UBIQUITOUS MULTIMEDIA (MUM 2017), 2017, : 79 - 89
  • [40] Hand Gestures and Visual Annotation in Live 360 Panorama-based Mixed Reality Remote Collaboration
    Teo, Theophilus
    Lee, Gun A.
    Billinghurst, Mark
    Adcock, Matt
    PROCEEDINGS OF THE 30TH AUSTRALIAN COMPUTER-HUMAN INTERACTION CONFERENCE (OZCHI 2018), 2018, : 406 - 410