A multi-modal context-aware sequence stage validation for human-centric AR assembly

Cited by: 0
Authors
Fang, Wei [1 ]
Zhang, Tienong [1 ]
Wang, Zeyu [1 ]
Ding, Ji [2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Automat, Beijing, Peoples R China
[2] Beijing Aerosp Automat Control Inst, Beijing, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Augmented reality; Context-aware; Assembly sequence validation; Multi-modal perception; Human-centric manufacturing; AUGMENTED REALITY SYSTEM;
DOI
10.1016/j.cie.2024.110355
CLC number
TP39 [Computer applications];
Subject classification code
081203; 0835;
Abstract
Augmented reality (AR) has demonstrated superior performance in supporting manual assembly tasks by delivering intuitive guidance directly on the workbench, reducing mental load and enabling time-saving operations. Nevertheless, current AR-assisted assembly mainly focuses on superimposing visual instructions onto real scenes and assumes that the worker performs the operations correctly as instructed; the actual assembly execution process is not confirmed, and operating errors on the shop floor remain difficult to avoid. To this end, this paper proposes a multi-modal context-aware on-site assembly stage recognition method for human-centric AR assembly. Firstly, a sim-real point cloud-based semantic understanding method for assembly stage identification is presented, which can recognize the current sequence stage during the AR assembly process even for weakly textured workpieces. In addition, 2D image-based semantic recognition of on-site images from the RGB-D camera is applied as compensation, resulting in a robust multi-modal context-aware assembly stage validation for the ongoing AR assembly tasks. A context-aware closed-loop AR assembly system then confirms the actual assembly result automatically, relieving workers of the mental load of activating the next assembly instruction and of confirming the current status during the actual operation. Finally, extensive experiments are carried out, and the results illustrate that the proposed context-aware AR assembly system can monitor the on-site sequence stage while providing human-centric AR assembly guidance.
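To make the validation logic described in the abstract concrete, the following Python sketch illustrates one plausible way such a multi-modal stage check could be wired together: a point-cloud branch and a 2D image branch each predict the current stage, the image branch compensates when the point-cloud confidence is low (e.g. weakly textured workpieces), and the next AR instruction is released only once the fused prediction matches the expected stage. This is an illustrative sketch only, not the authors' implementation; all names, data structures, and thresholds (StagePrediction, pc_trust, min_conf) are assumptions.

# Minimal sketch (not the paper's implementation): fusing a point-cloud-based
# stage classifier with a 2D image-based classifier, and gating the next AR
# instruction on a confirmed stage. Names and thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class StagePrediction:
    stage_id: int      # index of the recognized assembly stage
    confidence: float  # classifier confidence in [0, 1]


def fuse_predictions(pc_pred: StagePrediction,
                     img_pred: StagePrediction,
                     pc_trust: float = 0.6) -> StagePrediction:
    """Combine the point-cloud and image branches.

    If both branches agree, keep that stage with the higher confidence.
    If they disagree, fall back to the image branch only when the
    point-cloud confidence drops below pc_trust; otherwise trust the
    point-cloud branch.
    """
    if pc_pred.stage_id == img_pred.stage_id:
        return StagePrediction(pc_pred.stage_id,
                               max(pc_pred.confidence, img_pred.confidence))
    if pc_pred.confidence < pc_trust:
        return img_pred
    return pc_pred


def should_advance(expected_stage: int, fused: StagePrediction,
                   min_conf: float = 0.5) -> bool:
    """Closed-loop check: release the next AR instruction only when the
    fused perception confirms the currently expected stage."""
    return fused.stage_id == expected_stage and fused.confidence >= min_conf


if __name__ == "__main__":
    # Example: the point-cloud branch is unsure (weak texture), so the
    # image branch compensates and the expected stage 4 is confirmed.
    pc = StagePrediction(stage_id=3, confidence=0.42)
    img = StagePrediction(stage_id=4, confidence=0.81)
    fused = fuse_predictions(pc, img)
    print(fused, should_advance(expected_stage=4, fused=fused))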
Pages: 18
Related papers
50 records in total
  • [1] Things that see: Context-aware multi-modal interaction
    Crowley, James L.
    [J]. COGNITIVE VISION SYSTEMS: SAMPLING THE SPECTRUM OF APPROACHES, 2006, 3948 : 183 - 198
  • [2] CONTEXT-AWARE DEEP LEARNING FOR MULTI-MODAL DEPRESSION DETECTION
    Lam, Genevieve
    Huang Dongyan
    Lin, Weisi
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3946 - 3950
  • [3] Multi-Modal Context-Aware reasoNer (CAN) at the Edge of IoT
    Rahman, Hasibur
    Rahmani, Rahim
    Kanter, Theo
    [J]. 8TH INTERNATIONAL CONFERENCE ON AMBIENT SYSTEMS, NETWORKS AND TECHNOLOGIES (ANT-2017) AND THE 7TH INTERNATIONAL CONFERENCE ON SUSTAINABLE ENERGY INFORMATION TECHNOLOGY (SEIT 2017), 2017, 109 : 335 - 342
  • [4] SCATEAgent: Context-aware software agents for multi-modal travel
    Yin, M
    Griss, M
    [J]. APPLICATIONS OF AGENT TECHNOLOGY IN TRAFFIC AND TRANSPORTATION, 2005, : 69 - 84
  • [5] Adaptive Context-Aware Multi-Modal Network for Depth Completion
    Zhao, Shanshan
    Gong, Mingming
    Fu, Huan
    Tao, Dacheng
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 5264 - 5276
  • [6] Experiments with multi-modal interfaces in a context-aware city guide
    Bornträger, C
    Cheverst, K
    Davies, N
    Dix, A
    Friday, A
    Seitz, J
    [J]. HUMAN-COMPUTER INTERACTION WITH MOBILE DEVICES AND SERVICES, 2003, 2795 : 116 - 130
  • [7] Versatile Multi-Modal Pre-Training for Human-Centric Perception
    Hong, Fangzhou
    Pan, Liang
    Cai, Zhongang
    Liu, Ziwei
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 16135 - 16145
  • [8] Context-Aware Decision Information Packages: An Approach to Human-Centric Smart Factories
    Hoos, Eva
    Hirmer, Pascal
    Mitschang, Bernhard
    [J]. ADVANCES IN DATABASES AND INFORMATION SYSTEMS, ADBIS 2017, 2017, 10509 : 42 - 56
  • [9] Towards Distributed and Context-Aware Human-Centric Cyber-Physical Systems
    Garcia-Alonso, Jose
    Berrocal, Javier
    Canal, Carlos
    Murillo, Juan M.
    [J]. ADVANCES IN SERVICE-ORIENTED AND CLOUD COMPUTING (ESOCC 2016), 2018, 707 : 59 - 73
  • [10] A human-centric framework for context-aware flowable services in cloud computing environments
    Zhu, Yishui
    Shtykh, Roman Y.
    Jin, Qun
    [J]. INFORMATION SCIENCES, 2014, 257 : 231 - 247