A multi-modal context-aware sequence stage validation for human-centric AR assembly

Times Cited: 0
Authors
Fang, Wei [1 ]
Zhang, Tienong [1 ]
Wang, Zeyu [1 ]
Ding, Ji [2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Automat, Beijing, Peoples R China
[2] Beijing Aerosp Automat Control Inst, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation;
Keywords
Augmented reality; Context-aware; Assembly sequence validation; Multi-modal perception; Human-centric manufacturing; AUGMENTED REALITY SYSTEM;
DOI
10.1016/j.cie.2024.110355
CLC number
TP39 [Computer applications];
Discipline code
081203; 0835;
Abstract
Augmented reality (AR) has demonstrated superior performance in supporting manual assembly tasks by delivering intuitive guidance directly on the workbench, alleviating mental load and enabling time-saving operations. Nevertheless, current AR-assisted assembly mainly focuses on superimposing visual instructions onto real scenarios and assumes that the worker performs correctly as instructed, ignoring confirmation of the actual assembly execution process, so operating errors on the shop floor remain difficult to avoid. To this end, this paper proposes a multi-modal context-aware on-site assembly stage recognition method for human-centric AR assembly. Firstly, a sim-real point cloud-based semantic understanding method for assembly stage identification is presented, which can recognize the current sequence stage during the AR assembly process even when encountering weakly textured workpieces. In addition, 2D image-based semantic recognition of on-site images from the RGB-D camera is applied as compensation, resulting in robust multi-modal context-aware assembly stage validation for the ongoing AR assembly tasks. This is followed by a context-aware closed-loop AR assembly system that confirms actual assembly results automatically, relieving workers of the mental load of activating the next assembly instruction and of confirming the current status during the actual operation. Finally, extensive experiments are carried out, and the results illustrate that the proposed context-aware AR assembly system can monitor the on-site sequence stage while providing human-centric AR assembly assistance.
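The fusion logic the abstract outlines — a point-cloud branch as the primary stage recognizer, with a 2D image branch as compensation, closing the loop only once the observed stage is confirmed — can be sketched as a simple late-fusion rule. This is a hypothetical illustration of the general idea, not the paper's method; all function names, thresholds, and stage labels are invented for the example.

```python
# Hedged sketch of multi-modal assembly stage validation via late fusion.
# pc_* comes from a point-cloud stage classifier, img_* from a 2D image
# classifier; both are assumed to output a stage label and a confidence.

def validate_stage(pc_pred, img_pred, pc_conf, img_conf, conf_threshold=0.8):
    """Return the validated assembly stage, or None if validation fails."""
    if pc_conf >= conf_threshold:
        return pc_pred        # trust the point-cloud branch when confident
    if img_conf >= conf_threshold:
        return img_pred       # fall back to the RGB image branch
    if pc_pred == img_pred:
        return pc_pred        # cross-modal agreement lends credibility
    return None               # withhold confirmation; keep current instruction

# Closed-loop use: advance the AR instruction only when the observed stage
# matches the expected next stage in the assembly sequence.
expected_next = 3
observed = validate_stage(pc_pred=3, img_pred=3, pc_conf=0.65, img_conf=0.9)
advance = (observed == expected_next)  # True here: activate next instruction
```

A threshold-then-agreement rule like this is only one of many plausible fusion schemes; the paper may instead weight or learn the combination of the two modalities.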
Pages: 18
Related papers (50 in total)
  • [41] MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation
    Wang, Jinpeng
    Zeng, Ziyun
    Wang, Yunxiao
    Wang, Yuting
    Lu, Xingyu
    Li, Tianxiang
    Yuan, Jun
    Zhang, Rui
    Zheng, Hai-Tao
    Xia, Shu-Tao
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 6548 - 6557
  • [42] Multi-Modal Human-Aware Image Caption System for Intelligent Service Robotics Applications
    Luo, Ren C.
    Hsu, Yu-Ting
    Ye, Huan-Jun
    [J]. 2019 IEEE 28TH INTERNATIONAL SYMPOSIUM ON INDUSTRIAL ELECTRONICS (ISIE), 2019, : 1180 - 1185
  • [43] A Human Activity Recognition-Aware Framework Using Multi-modal Sensor Data Fusion
    Kwon, Eunjung
    Park, Hyunho
    Byon, Sungwon
    Jung, Eui-Suk
    Lee, Yong-Tae
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE), 2018,
  • [44] Context-Aware Deep Sequence Learning with Multi-View Factor Pooling for Time Series Classification
    Bhattacharjee, Sreyasee Das
    Tolone, William J.
    Elshambakey, Mohammed
    Cho, Isaac
    Mahabal, Ashish
    Djorgovski, George
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2018, : 959 - 966
  • [45] Human-centric collaborative assembly system for large-scale space deployable mechanism driven by Digital Twins and wearable AR devices
    Liu, Xinyu
    Zheng, Lianyu
    Wang, Yiwei
    Yang, Weiwei
    Jiang, Zhengyuan
    Wang, Binbin
    Tao, Fei
    Li, Yun
    [J]. JOURNAL OF MANUFACTURING SYSTEMS, 2022, 65 : 720 - 742
  • [46] Detection and Analysis of Periodic Actions for Context-Aware Human Centric Cyber Physical System to Enable Adaptive Occupational Therapy
    Neshov, Nikolay
    Manolova, Agata
    Tonchev, Krasimir
    Boumbarov, Ognian
    [J]. PROCEEDINGS OF THE 2019 10TH IEEE INTERNATIONAL CONFERENCE ON INTELLIGENT DATA ACQUISITION AND ADVANCED COMPUTING SYSTEMS - TECHNOLOGY AND APPLICATIONS (IDAACS), VOL. 2, 2019, : 685 - 690
  • [47] Multi-modal human detection from aerial views by fast shape-aware clustering and classification
    Beleznai, Csaba
    Steininger, Daniel
    Croonen, Gerardus
    Broneder, Elisabeth
    [J]. 2018 10TH IAPR WORKSHOP ON PATTERN RECOGNITION IN REMOTE SENSING (PRRS), 2018,
  • [48] The HA4M dataset: Multi-Modal Monitoring of an assembly task for Human Action recognition in Manufacturing
    Cicirelli, Grazia
    Marani, Roberto
    Romeo, Laura
    García Domínguez, Manuel
    Heras, Jónathan
    Perri, Anna G.
    D'Orazio, Tiziana
    [J]. SCIENTIFIC DATA, 2022, 9 (01)
  • [50] FAM3L: Feature-Aware Multi-Modal Metric Learning for Integrative Survival Analysis of Human Cancers
    Shao, Wei
    Liu, Jianxin
    Zuo, Yingli
    Qi, Shile
    Hong, Honghai
    Sheng, Jianpeng
    Zhu, Qi
    Zhang, Daoqiang
    [J]. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2023, 42 (09) : 2552 - 2565